CN111340784B - Mask R-CNN-based image tampering detection method - Google Patents
Classifications
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06F18/24—Classification techniques
- G06T5/70
- G06T7/11—Region-based segmentation
- G06T2207/10004—Still image; Photographic image
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- Y02T10/40—Engine management systems
Abstract
The invention discloses an improved Mask R-CNN image tampering detection method, belonging to the technical field of image recognition, comprising the following steps: constructing an image tampering detection network based on Mask R-CNN, the network comprising a main branch network, a noise branch network, a Resnet-FPN backbone network, a region proposal network (RPN) and a bilinear pooling ROI alignment network; inputting a tampered image into the image tampering detection network, which combines the image classification features, noise features and tampering candidate region features, and outputs the classification, tampered region localization and image segmentation results of the tampered image; training and testing the image tampering detection neural network with the data set; and obtaining the classification of tampered images, the localization of tampered regions and the prediction of image segmentation masks through the trained network. Through the Mask R-CNN-based image tampering detection network, the invention classifies tampered images, localizes tampered regions and segments the manipulated regions, realizing pixel-level prediction for tampered images.
Description
Technical Field
The invention relates to the technical field of image recognition, in particular to a Mask R-CNN-based image tampering detection method.
Background
The widespread adoption of high-resolution digital cameras and powerful digital image processing software has made falsified pictures increasingly realistic. Because digital images are easily tampered with, forged images cause a series of problems: when an image that a tamperer has deliberately manipulated is used for judicial forensics, news reporting or medical authentication, the resulting losses can be immeasurable. Image splicing is one of the most common types of image forgery: two regions with matching feature points are found, and by corresponding technical means the feature pixels of one image are gradually blended into the other image.
Existing tampering detection methods can generally only infer whether a given image is forged; they cannot simultaneously locate the spliced region and produce a segmentation mask of the tampered area.
Therefore, there is an urgent need for an image tampering detection method that can judge whether an image is forged while also outputting the spliced region and its segmentation mask.
Disclosure of Invention
The invention aims to provide an image tampering detection method that can judge whether an image is forged and output both the spliced region and its segmentation mask, comprising the following steps:
a Mask R-CNN-based image tampering detection method comprises the following steps:
s10, constructing an image tampering detection network based on Mask R-CNN, wherein the image tampering detection network comprises a main branch network, a noise branch network, a Resnet-FPN backbone network, an RPN region proposal network and an ROI alignment bilinear pooling network;
s20, inputting the tampered image into the main branch network; the main branch network extracts tampered image characteristics and inputs the tampered image characteristics to the Resnet-FPN backbone network;
s30, extracting local noise characteristics of the tampered image through an SRM filter layer by the tampered image input into the main branch network; inputting the local noise characteristics into the noise branch network;
s40, the noise branch network identifies local noise characteristics and noise classification characteristics of the tampered image and inputs the local noise characteristics and the noise classification characteristics to the Resnet-FPN backbone network;
s50, generating an image feature pyramid through the FPN of the Resnet-FPN backbone network according to the input tampered image features, the local noise features and the noise classification features;
the image feature pyramid comprises a boundary feature pyramid, an image classification feature pyramid and an image noise feature pyramid;
s60, inputting the image classification feature pyramid and the image noise feature pyramid into the ROI alignment bilinear pooling network;
s70, inputting the boundary feature pyramid into the RPN region proposal network to generate image tampering candidate region features, and inputting these features into the ROI alignment bilinear pooling network;
s80, the ROI alignment bilinear pooling network performs feature combination on the input image classification features, the image noise features and the image tampering candidate region features, and outputs classification, tampering region positioning and image segmentation results of the tampered images;
s90, training and testing the image tampering detection neural network by using the data set; the data set is a new tampered data set (PASCAL VOC-TP) created by synthesis based on the PASCAL VOC data set; the new tampered data set (PASCAL VOC-TP) includes the tampered image, the coordinate values of the tampered region, and the mask value of the tampered region;
s100, inputting the tampered images into the trained image tampering detection network to obtain classification of the tampered images, location of tampered areas and prediction of image segmentation masks.
Further, in step S30, the SRM filtering layer includes 3 basic filters, and kernels of the basic filters are:
Further, in step S50, the image feature pyramid structure is [P2, P3, P4, P5, P6]; for a w×h ROI on the original input image, the level P_k of the feature map assigned to it is defined by the following formula:

k = ⌊k₀ + log₂(√(w·h)/224)⌋

where w×h is the ROI area, k₀ is set to 4, and 224 is the ImageNet input image size.
Further, the RPN region proposal network corrects the boundary features in step S70, and the RPN correction loss is defined as:

L({p_i}, {t_i}) = (1/N_cls) Σ_i L_cls(p_i, p_i*) + λ (1/N_reg) Σ_i p_i* L_reg(t_i, t_i*)

where p_i is the predicted probability that anchor i in a mini-batch is a tampered region, p_i* is the ground-truth label associated with positive anchor i, t_i = {t_x, t_y, t_w, t_h} are the 4 parameterized coordinates of the predicted box, and t_i* are the ground-truth coordinates corresponding to the positive anchor; L_cls is the cross-entropy loss of the RPN and L_reg is the smooth L1 loss; N_cls is the mini-batch size in the RPN and N_reg is the number of anchors; λ is the hyper-parameter balancing the two losses.
Further, the ROI alignment bilinear pooling network model structure is shown in fig. 2.
Further, the tampered image is a three-channel (RGB) color image.
The invention has the beneficial effects that:
1) According to the invention, through the Mask R-CNN-based image tampering detection network, tampered images can be classified, the tampered image areas can be positioned and the manipulation areas can be segmented, so that prediction of tampered image pixel levels is realized.
2) Noise branches are added by using Mask R-CNN as a basic framework so as to distinguish noise inconsistencies of a real area and a tampered area and improve tamper detection precision.
3) A new tampered data set (PASCAL VOC-TP) is created by synthesis based on the PASCAL VOC data set; the synthesized data set includes the tampered image, the coordinates of the tampered region and the tampered-region mask, solving at the source the problem of insufficient tampered data for training the neural network.
Drawings
FIG. 1 is a flow chart of a Mask R-CNN-based image tampering detection method
FIG. 2 bilinear pooling ROI alignment network model structure
FIG. 3 synthetic dataset sample
FIG. 4 AP comparison on the synthetic dataset PASCAL VOC-TP
FIG. 5 F1 score comparison on two standard datasets
FIG. 6 Sample of prediction results
FIG. 7 F1 comparison with and without data enhancement on the two datasets
FIG. 8 Average AP values of the splicing and copy-move techniques in the present invention
Detailed Description
A Mask R-CNN-based image tampering detection method comprises the following steps:
s10, constructing an image tampering detection network based on Mask R-CNN, wherein the image tampering detection network comprises a main branch network, a noise branch network, a Resnet-FPN backbone network, a regional proposal network RPN and a bilinear pooling ROI alignment network;
s20, inputting a tampered three-channel (RGB) color image into the main branch network; the main branch network extracts the tampered image features and inputs them into the backbone network;
s30, extracting local noise characteristics of the tampered image of the input main branch network through the SRM filter layer; inputting local noise characteristics into a noise branch network;
the SRM filter layer includes 3 basic filters, and the kernel of the basic filters is:
s40, the noise branch network identifies local noise characteristics and noise classification characteristics of the tampered image, and inputs the local noise characteristics and the noise classification characteristics into the backbone network;
s50, generating an image feature pyramid through a backbone network FPN according to the input tampered image features, local noise features and noise classification features;
the image feature pyramids comprise boundary feature pyramids, image classification feature pyramids and image noise feature pyramids;
The image feature pyramid structure is [P2, P3, P4, P5, P6]; for a w×h ROI on the original input image, the level P_k of the feature map assigned to it is defined by the following formula:

k = ⌊k₀ + log₂(√(w·h)/224)⌋

where w×h is the ROI area, k₀ is set to 4, and 224 is the ImageNet input image size.
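As a minimal sketch (a hypothetical helper, not the patent's code), the level-assignment rule above maps each ROI to one of the pyramid levels P2 through P6:

```python
import math

def fpn_level(w, h, k0=4, canonical=224, k_min=2, k_max=6):
    """Assign a w x h ROI to pyramid level P_k.

    Implements k = floor(k0 + log2(sqrt(w*h) / canonical)), clamped to the
    [P2, P6] range of the feature pyramid described above.
    """
    k = math.floor(k0 + math.log2(math.sqrt(w * h) / canonical))
    return min(max(k, k_min), k_max)
```

A 224×224 ROI (the ImageNet canonical size) lands on P4; halving each side drops one level, doubling each side raises one.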
S60, inputting an image classification feature pyramid and an image noise feature pyramid into a bilinear pooling ROI alignment network;
s70, inputting the boundary feature pyramid into the region proposal network RPN to generate image tampering candidate region features, and inputting these features into the bilinear pooling ROI alignment network;
the regional proposal network RPN will correct the boundary features, and the RPN network correction loss can be defined as:
wherein p is i Representing the predicted probability that an anchor i is a tampered region in one mini-batch,representing true values associated with positive anchor point i, t i ={t x ,t y ,t w ,t h -4 parameterized coordinates of prediction, < }>Is the true value coordinate corresponding to the positive anchor point; l (L) cls Representing cross entropy loss of RPN network, L reg Represents a smoothl1 loss; n (N) cls Representing the size, N, of mini-batch in RPN networks reg Representing the number of anchor points; lambda represents the hyper-parameter that balances these two losses.
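A numpy sketch of this two-term loss (hypothetical helper; an actual implementation would live inside the detection framework):

```python
import numpy as np

def smooth_l1(x):
    """Elementwise smooth L1 loss."""
    ax = np.abs(x)
    return np.where(ax < 1.0, 0.5 * x ** 2, ax - 0.5)

def rpn_correction_loss(p, p_star, t, t_star, lam=1.0):
    """Sketch of the RPN correction loss described above.

    p      : (N,) predicted probability that each anchor is a tampered region
    p_star : (N,) ground-truth labels, 1 for positive anchors, 0 otherwise
    t      : (N, 4) predicted parameterized coordinates (tx, ty, tw, th)
    t_star : (N, 4) ground-truth coordinates (only positive anchors count)
    """
    eps = 1e-7
    n_cls = len(p)          # mini-batch size
    n_reg = len(p)          # number of anchors
    # L_cls: binary cross-entropy averaged over the mini-batch
    l_cls = -np.sum(p_star * np.log(p + eps)
                    + (1.0 - p_star) * np.log(1.0 - p + eps)) / n_cls
    # L_reg: smooth L1, activated only for positive anchors (p_star = 1)
    l_reg = np.sum(p_star[:, None] * smooth_l1(t - t_star)) / n_reg
    return l_cls + lam * l_reg
```

A near-perfect prediction drives both terms toward zero; a misplaced box on a positive anchor raises only the regression term.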
S80, carrying out feature combination on the input image classification features, the image noise features and the image tampering candidate region features by the bilinear pooling ROI alignment network, and outputting the classification of tampered images, the location of tampered regions and the image segmentation result;
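The feature combination in this step can be sketched as bilinear pooling of the two ROI feature vectors: outer product, signed square root, then L2 normalization. This is a common bilinear pooling recipe and is assumed here; the patent's exact structure is given in FIG. 2.

```python
import numpy as np

def bilinear_pool(f_rgb, f_noise):
    """Combine RGB-stream and noise-stream ROI features by bilinear pooling.

    Outer product -> flatten -> signed square root -> L2 normalization.
    (A common recipe, assumed here; the patent's variant is shown in FIG. 2.)
    """
    x = np.outer(f_rgb, f_noise).ravel()        # capture pairwise interactions
    x = np.sign(x) * np.sqrt(np.abs(x))         # signed square root
    norm = np.linalg.norm(x)
    return x / norm if norm > 0 else x
```

The outer product lets every classification feature interact with every noise feature, which is what allows noise inconsistency to influence the final classification.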
s90, training and testing the image tampering detection neural network by using the data set; the data set is a new tampered data set (PASCAL VOC-TP) created by synthesis based on the PASCAL VOC data set; the new tampered data set (PASCAL VOC-TP) includes the tampered image, the coordinate values of the tampered region, and the mask value of the tampered region;
s100, the trained image tampering detection network is used to classify tampered images, locate tampered regions and predict image segmentation masks.
Experimental test for this example:
experimental results will be provided in this example to demonstrate the effectiveness of the tamper detection algorithm of the present invention. The invention uses a dual-branch Mask R-CNN network, and utilizes noise branches to distinguish noise inconsistency of a real area and a tampered area. Therefore, the present invention needs to verify whether the tamper image detection accuracy of the dual branches of the main branch network and the noise branch is improved. All experiments were performed in Ubuntu 16.04 using NVidia GeForce GTX 1080 Ti.
1 Pre-training model
Because the presently available public tampered data sets are insufficient to train a deep neural network, the test experiments of the present invention used the PASCAL VOC data set to synthesize 40,000 images (PASCAL VOC-TP), with the training set and test set divided in a 9:1 ratio. The generated data set includes the tampered image, the coordinates of the tampered region, and the tampered-region mask. The model is pre-trained on the synthesized data set and evaluated with average precision (AP). As shown in FIG. 4, the improved Mask R-CNN of the present invention outperforms the original Mask R-CNN. FIG. 3 shows a sample of the synthetic data.
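A sketch of how one PASCAL VOC-TP style sample might be synthesized (hypothetical helper; the patent does not reproduce its synthesis code): an object selected by a source segmentation mask is pasted into a destination image, yielding the tampered image, the tamper mask, and the tampered-region bounding box.

```python
import numpy as np

def synthesize_splice(src, src_mask, dst, offset=(0, 0)):
    """Paste the object selected by src_mask from src into dst.

    Returns the tampered image, a boolean tamper mask, and the bounding box
    (x0, y0, x1, y1) of the tampered region, i.e. the three kinds of ground
    truth stored in the synthesized data set. Assumes the pasted object
    remains at least partly inside dst.
    """
    tampered = dst.copy()
    mask = np.zeros(dst.shape[:2], dtype=bool)
    ys, xs = np.nonzero(src_mask)
    dy, dx = offset
    ty, tx = ys + dy, xs + dx
    # keep only pixels that land inside the destination image
    keep = (ty >= 0) & (ty < dst.shape[0]) & (tx >= 0) & (tx < dst.shape[1])
    tampered[ty[keep], tx[keep]] = src[ys[keep], xs[keep]]
    mask[ty[keep], tx[keep]] = True
    x0, x1 = tx[keep].min(), tx[keep].max()
    y0, y1 = ty[keep].min(), ty[keep].max()
    return tampered, mask, (x0, y0, x1, y1)
```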
2 data set and evaluation
The method proposed by the present invention was compared with prior methods on the COVER and Columbia data sets. The COVER data set focuses on copy-move forgery, which hides tampering by pasting an object over the same or a similar object in the image; the Columbia data set focuses on uncompressed image splicing. Both data sets provide ground-truth mask labels, so both were selected for evaluation.
The performance of the proposed method and the existing methods is evaluated using the AP and F1 score. For each output, the threshold is varied and the threshold yielding the highest F1 score is selected. The evaluation index F1 is defined as:

F1 = 2·TP / (2·TP + FN + FP)

where I_out denotes the mask output by the algorithm and I_gt the ground-truth mask; TP is the number of true-positive pixels (predicted spliced, actually spliced), FN the number of false-negative pixels (predicted non-spliced, actually spliced), and FP the number of false-positive pixels (predicted spliced, actually non-spliced).
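A sketch of the pixel-level F1 computation between the algorithm's output mask I_out and the ground-truth mask I_gt, following the TP/FN/FP definitions above:

```python
import numpy as np

def pixel_f1(i_out, i_gt):
    """F1 = 2*TP / (2*TP + FN + FP), counted over pixels of boolean masks."""
    i_out = np.asarray(i_out, dtype=bool)
    i_gt = np.asarray(i_gt, dtype=bool)
    tp = np.sum(i_out & i_gt)     # predicted spliced, actually spliced
    fn = np.sum(~i_out & i_gt)    # predicted non-spliced, actually spliced
    fp = np.sum(i_out & ~i_gt)    # predicted spliced, actually non-spliced
    denom = 2 * tp + fn + fp
    return 2 * tp / denom if denom else 1.0
```

Sweeping a threshold over the network's soft mask and keeping the best F1, as described above, would call this function once per threshold.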
3 comparison of three experiments
The algorithm proposed by the invention is compared with existing tampering localization algorithms, using the Matlab toolbox implementations of these existing methods, evaluated on the COVER and Columbia data sets respectively. The proposed method clearly outperforms the existing baseline methods in F1 score, and the F1 score is also improved compared with the original Mask R-CNN. The evaluation results are shown in FIG. 5, and sample prediction results in FIG. 6.
For data enhancement, two groups of experiments were compared: the first group used no data enhancement, while the second group flipped the image with a probability of 0.5. FIG. 7 shows the comparison; the best results were obtained when image flipping was used.
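The second group's augmentation can be sketched as follows (hypothetical helper; the image and its tamper mask must be flipped together so the ground truth stays aligned):

```python
import random
import numpy as np

def random_hflip(image, mask, p=0.5, rng=random):
    """Horizontally flip image and tamper mask together with probability p."""
    if rng.random() < p:
        return np.fliplr(image).copy(), np.fliplr(mask).copy()
    return image, mask
```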
4 tamper technique detection
In order to analyze the proposed network structure and its detection of different tampering techniques, the prediction categories of the network were modified to splicing and copy-move respectively. The network of the invention can detect multiple categories of tampering techniques. FIG. 8 shows the AP scores after the category change.
The foregoing description is only a preferred embodiment of the present invention and is not intended to limit its technical scope; any minor modifications, equivalent changes and refinements made to the above embodiment according to the technical principles of the present invention still fall within the scope of the technical solutions of the present invention.
Claims (5)
1. The image tampering detection method based on Mask R-CNN is characterized by comprising the following steps:
s10, constructing an image tampering detection network based on Mask R-CNN, wherein the image tampering detection network comprises a main branch network, a noise branch network, a Resnet-FPN backbone network, an RPN region proposal network and an ROIAlign bilinear pooling network;
s20, inputting a tampered image into the main branch network; the main branch network extracts tampered image characteristics and inputs the tampered image characteristics to the Resnet-FPN backbone network;
s30, extracting local noise characteristics of the tampered image through an SRM filter layer by the tampered image input into the main branch network; inputting the local noise characteristics into the noise branch network;
s40, the noise branch network identifies local noise characteristics and noise classification characteristics of the tampered image and inputs the local noise characteristics and the noise classification characteristics to the Resnet-FPN backbone network;
s50, generating an image feature pyramid through the FPN of the Resnet-FPN backbone network according to the input tampered image features, the local noise features and the noise classification features;
the image feature pyramid comprises a boundary feature pyramid, an image classification feature pyramid and an image noise feature pyramid;
s60, inputting the image classification feature pyramid and the image noise feature pyramid into the ROI alignment bilinear pooling network;
s70, inputting the boundary feature pyramid into the RPN region proposal network to generate image tampering candidate region features, and inputting these features into the ROIAlign bilinear pooling network;
s80, the ROIAlign bilinear pooling network performs feature combination on the input image classification features, image noise features and image tampering candidate region features, and outputs the classification, tampered region localization and image segmentation results of the tampered image;
s90, training and testing the image tampering detection neural network by using the data set; the data set is a new tampered data set (PASCAL VOC-TP) created by synthesis based on the PASCAL VOC data set; the new tampered data set (PASCAL VOC-TP) includes the tampered image, the coordinate values of the tampered region, and the mask value of the tampered region;
s100, inputting the tampered images into the trained image tampering detection network to obtain classification of the tampered images, location of tampered areas and prediction of image segmentation masks.
3. The image tampering detection method as defined in claim 1, wherein in step S50 the image feature pyramid structure is [P2, P3, P4, P5, P6], and for a w×h ROI on the original input image, the level P_k of the selected feature map is defined by the following formula:

k = ⌊k₀ + log₂(√(w·h)/224)⌋

where w×h is the ROI area, k₀ is set to 4, and 224 is the ImageNet input image size.
4. The image tampering detection method of claim 1, wherein the RPN region proposal network corrects the boundary features in step S70, and the RPN region proposal network correction loss is defined as:

L({p_i}, {t_i}) = (1/N_cls) Σ_i L_cls(p_i, p_i*) + λ (1/N_reg) Σ_i p_i* L_reg(t_i, t_i*)

where p_i is the predicted probability that anchor i in a mini-batch is a tampered region, p_i* is the ground-truth label associated with positive anchor i, t_i = {t_x, t_y, t_w, t_h} are the 4 parameterized coordinates of the predicted box, and t_i* are the ground-truth coordinates corresponding to the positive anchor; L_cls is the cross-entropy loss of the RPN and L_reg is the smooth L1 loss; N_cls is the mini-batch size in the RPN and N_reg is the number of anchors; λ is the hyper-parameter balancing the two losses.
5. The image tampering detection method as defined in claim 1, wherein said tampered image is a three-channel (RGB) color image.
Priority Applications (1)
- CN202010122303.6A, priority/filing date 2020-02-25: Mask R-CNN-based image tampering detection method
Publications (2)
- CN111340784A, published 2020-06-26
- CN111340784B, granted 2023-06-23
Family ID: 71187089
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant