CN112215780B - Image evidence obtaining and resistance attack defending method based on class feature restoration fusion - Google Patents

Image evidence obtaining and resistance attack defending method based on class feature restoration fusion

Info

Publication number
CN112215780B
CN112215780B (application number CN202011175112.2A)
Authority
CN
China
Prior art keywords
image
pixel
classification
restoration
class
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011175112.2A
Other languages
Chinese (zh)
Other versions
CN112215780A (en
Inventor
陈晋音 (Chen Jinyin)
陈若曦 (Chen Ruoxi)
蒋焘 (Jiang Tao)
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202011175112.2A priority Critical patent/CN112215780B/en
Publication of CN112215780A publication Critical patent/CN112215780A/en
Application granted granted Critical
Publication of CN112215780B publication Critical patent/CN112215780B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
        • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
        • G06T 5/70: Denoising; Smoothing
        • G06T 5/77: Retouching; Inpainting; Scratch removal
        • G06T 5/90: Dynamic range modification of images or parts thereof
        • G06T 2207/20081: Training; Learning
        • G06T 2207/20084: Artificial neural networks [ANN]
        • G06T 2207/20221: Image fusion; Image merging
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
        • G06N 3/045: Combinations of networks
        • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image forensics and adversarial attack defense method based on class-feature restoration and fusion, comprising the following steps: input an original image containing an adversarial perturbation into a classification model built on a convolutional neural network, visualize the class features of the extracted feature maps, locate the reconstruction region from the visualization result, and perform feature restoration on that region to obtain a restored image; denoise the original image to obtain a denoised image, fuse the restored image with the denoised image to obtain a fused image, select several high-image-quality classification regions according to the pixel distribution characteristics of the fused image, input them into the classification model, and take the class with the highest classification probability output by the model as the defended class label. The method improves the robustness of the deep learning model and accurately restores forensic information.

Description

Image forensics and adversarial attack defense method based on class-feature restoration and fusion
Technical Field
The invention relates to the field of data security, and in particular to an image forensics and adversarial attack defense method based on class-feature restoration and fusion.
Background
With the rapid development of electronic technology, digital cameras and image scanning devices have become ubiquitous. Digital images are widely used in people's daily office work, study, and everyday life, as well as in critical fields such as media, military intelligence, judicial forensics, and scientific discovery. In recent years, owing to their superior and stable performance, deep learning models have been applied to image recognition and play an important role in image forensics.
However, deep learning models are vulnerable to adversarial perturbations that cause misclassification. Adversarial attack algorithms in computer vision, such as FGSM, DeepFool, BIM, and JSMA, can make deep learning models misclassify. Likewise, the source-camera fingerprint of an image can be forged by dedicated techniques, and forensic algorithms can be deceived by generative adversarial networks such as GAN and WGAN. As digital image tampering and counterfeiting become widespread and easy, anti-forensics technology poses a serious challenge to the authenticity and authority of images, with potential harm to national, military, and social security. Security in image forensics, and especially in deep-learning-based image forensics, has therefore attracted wide attention, and defenses against adversarial attacks in image forensics are of great importance.
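As a concrete illustration of one such attack, FGSM perturbs an image by a small step in the direction of the sign of the loss gradient. This is a minimal sketch: the hand-made gradient array stands in for a gradient backpropagated through a real model.

```python
import numpy as np

def fgsm(image, loss_grad, eps=0.03):
    """Fast Gradient Sign Method: x_adv = clip(x + eps * sign(dL/dx), 0, 1)."""
    return np.clip(image + eps * np.sign(loss_grad), 0.0, 1.0)

x = np.full((4, 4), 0.5)                 # toy "image" in [0, 1]
g = np.array([[1.0, -1.0] * 2] * 4)      # stand-in loss gradient
x_adv = fgsm(x, g, eps=0.03)             # perturbed by at most eps per pixel
```

The perturbation is bounded by eps in the L-infinity norm, which is what makes it hard to see yet effective at flipping a classifier's decision.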
Chen et al. used a simple deep learning model with median-filtering preprocessing of the input image to remove the influence of image content on detection performance, amplifying the image noise signal to detect whether an image has been tampered with by median filtering; see Jiansheng Chen et al., "Median filtering forensics based on convolutional neural networks," 2015. Bayar et al. developed a constrained convolutional layer that jointly suppresses image content and adaptively learns manipulation-detection features, with good detection of various image-tampering operations; see Belhassen Bayar et al., "A Deep Learning Approach to Universal Image Manipulation Detection Using a New Convolutional Layer," 2016. Marra et al. explored a GAN-generated-image detection scheme based on photo-response non-uniformity to determine whether an input image is GAN-generated; see Marra, F.; Gragnaniello, D.; Verdoliva, L.; Poggi, G., "Do GANs leave artificial fingerprints?," arXiv 2018.
However, the prior art has at least the following drawbacks:
(1) These defense methods can only detect whether an image has been tampered with or counterfeited; they cannot restore the fingerprint information and class label of the original imaging device.
(2) These defense methods have difficulty distinguishing adversarial perturbations from device fingerprint information, and often fail to achieve good forensic results under adversarial attack.
(3) They do not improve the robustness of the deep learning model against adversarial attacks.
Disclosure of Invention
To overcome the defects that existing image forensics methods cannot restore the fingerprint information of the original imaging device and have difficulty defending against adversarial attacks, the invention provides an image forensics and adversarial attack defense method based on class-feature restoration and fusion, which improves the robustness of the deep learning model and accurately restores forensic information.
The technical solution adopted by the invention to solve the technical problem is as follows:
An image forensics and adversarial attack defense method based on class-feature restoration and fusion, comprising the following steps:
inputting an original image into a classification model, visualizing the class features of the extracted feature maps, locating a reconstruction region from the visualization result, and performing feature restoration on the reconstruction region to obtain a restored image;
denoising the original image to obtain a denoised image, fusing the restored image with the denoised image to obtain a fused image, selecting several high-image-quality classification regions according to the pixel distribution characteristics of the fused image, inputting them into the classification model, and taking the class with the highest classification probability output by the model as the defended class label.
Compared with the prior art, the beneficial effects of the invention include at least the following:
In the image forensics and adversarial attack defense method based on class-feature restoration and fusion, class-feature visualization and the extraction of the reconstruction region separate the adversarial perturbation from the device fingerprint of the image without damaging the traceable fingerprint, thereby filtering the perturbation. Repairing and fusing the class features correctly restores the class label of the adversarial sample while retaining most of the device fingerprint. Region selection and classification voting are further applied during recognition, ensuring the accuracy of the defended class label. Experimental results on real image datasets show that the algorithm has good applicability and accuracy, effectively restores device information, outputs the corresponding original class label, and defends well against adversarial attacks.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required by the embodiments or by the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a flowchart of the image forensics and adversarial attack defense method based on class-feature restoration and fusion provided by an embodiment.
Detailed Description
The present invention will be described in further detail below with reference to the drawings and embodiments, so that its objects, technical solutions, and advantages become more apparent. It should be understood that the detailed description is presented by way of example only and is not intended to limit the scope of the invention.
Fig. 1 is a flowchart of the image forensics and adversarial attack defense method based on class-feature restoration and fusion provided by an embodiment. Referring to Fig. 1, the method includes the following steps.
step 1, initializing, namely taking an original image containing disturbance resistance as a resistance sample, unifying the resistance sample into the same size, and inputting the same into a classification model constructed based on a convolutional neural network.
And step 2, performing class feature visualization on the feature map and finding out a reconstruction region.
For an original image, first, the first n classes with the largest classification probability of the input original image are calculated. In this experiment, n is taken as 6.
In an embodiment, grad-CAM is used to visualize features. The Grad-CAM workflow is as follows: given a challenge sample and a class of interest as inputs, the confidence score and class are calculated through the forward propagation mechanism of the neural network, through the CNN layer of the model. All other classes are set to 0 except for the desired class set to 1. Defining the weight of the mth feature map corresponding to the c-th category in Grad-CAM asCalculated by the following formula:
wherein Z represents the total number of pixels of the mth feature map, y c A score gradient representing the c-th category,representing the pixel value at position (i, j) in the mth feature map;
and calculating the weight of the feature class object according to the formula (1) by taking each class of the first n classes as a target class. Weights and pixel values for class c corresponding to all feature mapsWeighted summation to get thermodynamic diagram ++for the c-th category at position (i, j)>
Thermodynamic diagramAs a result of the visualization, the thermal region represents the region of interest, from green to red, characteristic of the classification model for the input image for the c-th categoryThe importance is sequentially increased. So far, the region with the greatest influence on the current image classification result can be found, and the region is also the region needing to be reconstructed.
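The weight and heatmap computation above can be sketched in a few lines of NumPy. The feature maps and score gradients below are random stand-ins for the activations and backpropagated gradients of a real CNN, and the array names are illustrative:

```python
import numpy as np

def grad_cam_heatmap(feature_maps, score_grads):
    """Compute a Grad-CAM heatmap from feature maps A^m and gradients dy^c/dA^m.

    feature_maps: array of shape (M, H, W), the M feature maps A^m
    score_grads:  array of shape (M, H, W), gradients of the class score y^c
    Returns an (H, W) heatmap; the ReLU keeps only positively contributing pixels.
    """
    # Equation (1): alpha_m^c = (1/Z) * sum_{i,j} dy^c / dA^m_{ij}
    weights = score_grads.mean(axis=(1, 2))                # shape (M,)
    # Heatmap: L^c = ReLU( sum_m alpha_m^c * A^m )
    heatmap = np.tensordot(weights, feature_maps, axes=1)  # shape (H, W)
    return np.maximum(heatmap, 0.0)

# Toy usage with random tensors standing in for real CNN activations.
rng = np.random.default_rng(0)
fmaps = rng.normal(size=(8, 14, 14))
grads = rng.normal(size=(8, 14, 14))
cam = grad_cam_heatmap(fmaps, grads)
```

In practice the gradients would be obtained from the classification model by backpropagating the one-hot class signal, per the workflow described above.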
When selecting the reconstruction region, the highest-scoring local maximum p is detected from the heatmap, using a local-maximum search to avoid overlapping detections. With neighborhood radius w around the local maximum p, all heatmap pixel values inside the (2w+1)×(2w+1) region centered on the selected local maximum p are set to 0, and all other pixels are left unchanged:

    L^c_{ij} = 0   if |i − i_m| ≤ w and |j − j_m| ≤ w

where L^c_{ij} is the heatmap value at position (i, j), (i_m, j_m) is the position of the selected local maximum p, (i, j) is an arbitrary pixel position, and |·| denotes the absolute value.
The heatmaps of all n classes are processed in this way. The set of zeroed pixels forms the reconstruction region; the binary mask M is defined with M = 1 on the zeroed reconstruction region of the heatmap and M = 0 on all other pixels.
Step 3: repair the reconstruction region.
In this embodiment, the original image is used to perform feature restoration on the reconstruction region whose pixel values are 0, as follows.
Two pixel thresholds γ1 and γ2 are set, with γ1 < γ2. The pixels of the original image are traversed: pixels whose values lie within [γ1, γ2] are left unprocessed, while all other pixels undergo variation processing. For a pixel x_o requiring variation processing, with 4-neighborhood pixels x_e, x_n, x_w, x_s (east, north, west, south), the variation is

    x'_o = (w_e x_e + w_n x_n + w_w x_w + w_s x_s) / (w_e + w_n + w_w + w_s)

where w_e, w_n, w_w, w_s are the variation coefficients of the neighborhood pixels x_e, x_n, x_w, x_s with respect to x_o; the repaired pixel x'_o is obtained as their weighted average.
After variation processing of all pixels whose values fall outside [γ1, γ2], the repaired pixels together with all pixels whose values lie within [γ1, γ2] constitute the restored image.
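The threshold-and-weighted-average repair can be sketched as below. The patent does not fix the values of the variation coefficients, so equal weights are assumed here for illustration, and the threshold values are likewise placeholders:

```python
import numpy as np

def repair_pixels(img, gamma1=0.05, gamma2=0.95, weights=(1.0, 1.0, 1.0, 1.0)):
    """Replace pixels whose values fall outside [gamma1, gamma2] by a weighted
    average of their 4-neighborhood (east, north, west, south).

    weights: (w_e, w_n, w_w, w_s) variation coefficients; equal weights are an
    illustrative assumption, not a value given in the source.
    """
    out = img.copy()
    w_e, w_n, w_w, w_s = weights
    H, W = img.shape
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            if gamma1 <= img[i, j] <= gamma2:
                continue  # in-range pixels are left untouched
            x_e, x_n = img[i, j + 1], img[i - 1, j]
            x_w, x_s = img[i, j - 1], img[i + 1, j]
            out[i, j] = (w_e * x_e + w_n * x_n + w_w * x_w + w_s * x_s) / (
                w_e + w_n + w_w + w_s)
    return out

# A zeroed reconstruction pixel surrounded by valid pixels is filled back in.
img = np.full((5, 5), 0.5)
img[2, 2] = 0.0
repaired = repair_pixels(img)
```

The zeroed pixel is repaired to the average of its four neighbors (0.5), while in-range pixels pass through unchanged.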
Step 4: fuse the restored image and the original image.
To reduce the influence of the adversarial perturbation in the restored image and the original image on source tracing, the two are fused as follows: the original image is denoised by wavelet-transform denoising to obtain a denoised image, and the restored image and the denoised image are then fused to obtain the fused image:

    I_out(i, j) = M_ij · I_repair(i, j) + (1 − M_ij) · I_denoise(i, j)        (10)

where I_out is the fused image, I_repair(i, j) is the pixel value at position (i, j) in the restored image, I_denoise(i, j) is the pixel value at position (i, j) in the denoised image, and M_ij = 1 on the zeroed reconstruction region of the heatmap while M_ij = 0 on all other pixels.
Through the fusion of equation (10), pixels of the reconstruction region are taken from the restored image, and all remaining pixels are replaced by the denoised image. The fusion thus combines the class features with the class-irrelevant features into a single fused image.
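Equation (10) is a pixel-wise blend that can be sketched directly. The wavelet denoising itself (done in practice with a library such as PyWavelets) is outside this sketch, so constant arrays stand in for the restored and denoised images:

```python
import numpy as np

def fuse(repaired, denoised, mask):
    """Equation (10): I_out = M * I_repair + (1 - M) * I_denoise.

    mask is 1 inside the reconstruction region (class features, taken from the
    repaired image) and 0 elsewhere (class-irrelevant features, taken from the
    denoised image).
    """
    m = mask.astype(repaired.dtype)
    return m * repaired + (1.0 - m) * denoised

rep = np.full((4, 4), 0.8)     # stand-in restored image
den = np.full((4, 4), 0.2)     # stand-in wavelet-denoised image
M = np.zeros((4, 4))
M[1:3, 1:3] = 1                # reconstruction region
fused = fuse(rep, den, M)
```

Inside the masked region the fused image equals the restored image; outside it, the denoised image.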
Step 5: select classification regions.
Regions with low-quality texture and low-frequency content carry no important information for tracing the image device. To reduce the influence of such low-quality regions on image tracing, classification regions are selected before the image is input to the classification model: several high-image-quality regions are chosen according to the pixel distribution characteristics of the fused image, as follows.
The pixel values of the fused image are normalized to [0, 1], and several candidate classification regions of size M×N are selected from the fused image. For each candidate region, the pixel average and pixel standard deviation of each color channel are computed:

    μ_k = (1/(M·N)) Σ_{i,j} B_ijk,    σ_k = sqrt( (1/(M·N)) Σ_{i,j} (B_ijk − μ_k)² )

where k is the color-channel index taking values 1, 2, 3 for the R, G, B channels, μ_k is the pixel average of the k-th channel, σ_k is the pixel standard deviation of the k-th channel, and B_ijk is the pixel value at position (i, j) of the k-th channel.
Two nonlinear transformations are then applied to the pixel average μ_k and the pixel standard deviation σ_k. The nonlinear transformation of the pixel average emphasizes mid-range brightness, giving a better score score(μ_k) to bright but unsaturated regions; the nonlinear transformation of the pixel standard deviation emphasizes higher values, giving a higher score score(σ_k) to highly textured regions.
The scores score(μ_k) and score(σ_k) of the three channels of each candidate region are combined by weighted summation into the candidate-region score score(p).
The higher score(p) is, the higher the image quality of the candidate classification region, and the more its device-fingerprint information benefits classification.
Step 6: obtain the classification probability of the original image from the classification regions.
In this embodiment, several high-image-quality classification regions are selected and input into the classification model, and the class with the highest classification probability output by the model is taken as the defended class label. Specifically, for each target class, the candidate regions are ranked by score(p) from high to low, the top 10-15 candidate regions are extracted as inputs to the classification model, and the highest classification probability among them is selected as the final classification probability of the model, thereby realizing the defense against the adversarial attack.
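The ranking-and-voting step can be sketched as follows; a toy callable stands in for the CNN classification model, and pre-computed region scores are passed in as (score, patch) pairs:

```python
import numpy as np

def defended_label(scored_patches, classify, top_k=10):
    """Classify the top_k highest-scoring regions and return the class with the
    single highest probability across them: the defended class label.

    scored_patches: list of (score, patch) pairs
    classify: callable mapping a patch to a probability vector
    """
    ranked = sorted(scored_patches, key=lambda sp: sp[0], reverse=True)[:top_k]
    best_label, best_prob = None, -1.0
    for _, patch in ranked:
        probs = classify(patch)
        label = int(np.argmax(probs))
        if probs[label] > best_prob:
            best_label, best_prob = label, float(probs[label])
    return best_label, best_prob

# Dummy classifier: class 2 wins on bright patches, class 0 otherwise.
def toy_classify(patch):
    if patch.mean() < 0.5:
        return np.array([0.6, 0.3, 0.1])
    return np.array([0.1, 0.2, 0.7])

patches = [(0.9, np.full((8, 8), 0.8)), (0.1, np.full((8, 8), 0.2))]
label, prob = defended_label(patches, toy_classify, top_k=1)
```

With top_k = 1, only the highest-scoring (bright) region is classified, so the defended label is the class it votes for.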
The image forensics and adversarial attack defense method based on class-feature restoration and fusion provided by this embodiment separates, through class-feature visualization and extraction of the reconstruction region, the adversarial perturbation from the device fingerprint of the image without damaging the traceable fingerprint, thereby filtering the perturbation. Repairing and fusing the class features correctly restores the class label of the adversarial sample while retaining most of the device fingerprint. Region selection and classification voting are further applied during recognition, ensuring the accuracy of the defended class label. Experimental results on real image datasets show that the algorithm has good applicability and accuracy, effectively restores device information, outputs the corresponding original class label, and defends well against adversarial attacks.
The foregoing detailed description of the preferred embodiments and their advantages is merely illustrative of the presently preferred embodiments of the invention; any changes, additions, substitutions, and equivalents made within the scope of these embodiments are intended to be included within the scope of the invention.

Claims (7)

1. An image forensics and adversarial attack defense method based on class-feature restoration and fusion, characterized by comprising the following steps:
inputting an original image containing an adversarial perturbation into a classification model built on a convolutional neural network to extract features, visualizing the class features of the extracted feature maps, locating a reconstruction region from the visualization result, and performing feature restoration on the reconstruction region to obtain a restored image;
denoising the original image to obtain a denoised image, fusing the restored image with the denoised image to obtain a fused image, selecting several high-image-quality classification regions according to the pixel distribution characteristics of the fused image, inputting them into the classification model, and taking the class with the highest classification probability output by the model as the defended class label, wherein the classification regions are selected as follows:
normalizing the pixel values of the fused image to [0, 1], selecting several candidate classification regions of size M×N from the fused image, and computing the pixel average and pixel standard deviation of each color channel of each candidate region:

    μ_k = (1/(M·N)) Σ_{i,j} B_ijk,    σ_k = sqrt( (1/(M·N)) Σ_{i,j} (B_ijk − μ_k)² )

where k is the color-channel index taking values 1, 2, 3 for the R, G, B channels, μ_k is the pixel average of the k-th channel, σ_k is the pixel standard deviation of the k-th channel, and B_ijk is the pixel value at position (i, j) of the k-th channel;
applying nonlinear transformations to the pixel average μ_k and the pixel standard deviation σ_k, wherein the transformation of the pixel average emphasizes mid-range brightness, giving a better score score(μ_k) to bright but unsaturated regions, and the transformation of the pixel standard deviation emphasizes higher values, giving a higher score score(σ_k) to highly textured regions;
combining the scores score(μ_k) and score(σ_k) of the three channels of each candidate region by weighted summation into the candidate-region score score(p);
wherein the higher score(p) is, the higher the image quality of the candidate classification region, and the more its device-fingerprint information benefits classification.
2. The image forensics and adversarial attack defense method based on class-feature restoration and fusion according to claim 1, characterized in that the class features of the extracted feature maps are visualized as follows:
the weight of the m-th feature map with respect to the c-th class, α_m^c, is computed as

    α_m^c = (1/Z) Σ_i Σ_j ∂y^c / ∂A^m_{ij}

where Z is the total number of pixels of the m-th feature map, y^c is the score of the c-th class, and A^m_{ij} is the pixel value at position (i, j) in the m-th feature map;
the weights α_m^c for class c and the pixel values of all feature maps are combined by weighted summation, followed by a ReLU, to obtain the heatmap of the c-th class at position (i, j):

    L^c_{ij} = ReLU( Σ_m α_m^c A^m_{ij} )

the heatmap L^c is the visualization result: the hot regions mark the areas of the input image that the classification model attends to for class c, with feature importance increasing in order from green to red.
3. The image forensics and adversarial attack defense method based on class-feature restoration and fusion according to claim 1, characterized in that the reconstruction region is obtained from the visualization result as follows:
the highest-scoring local maximum p is detected from the heatmap, using a local-maximum search to avoid overlapping detections; with neighborhood radius w around the local maximum p, all heatmap pixel values inside the (2w+1)×(2w+1) region centered on the selected local maximum p are set to 0, and all other pixels are left unchanged:

    L^c_{ij} = 0   if |i − i_m| ≤ w and |j − j_m| ≤ w

where L^c_{ij} is the heatmap value at position (i, j), (i_m, j_m) is the position of the selected local maximum p, (i, j) is an arbitrary pixel position, and |·| denotes the absolute value;
the set of zeroed pixels forms the reconstruction region, with M = 1 on the zeroed reconstruction region of the heatmap and M = 0 on all other pixels.
4. The image forensics and adversarial attack defense method based on class-feature restoration and fusion according to claim 1, characterized in that the original image is used to perform feature restoration on the reconstruction region whose pixel values are 0, as follows:
two pixel thresholds γ1 and γ2 are set, with γ1 < γ2; the pixels of the original image are traversed, pixels whose values lie within [γ1, γ2] are left unprocessed, and all other pixels undergo variation processing; for a pixel x_o requiring variation processing, with 4-neighborhood pixels x_e, x_n, x_w, x_s (east, north, west, south), the variation is

    x'_o = (w_e x_e + w_n x_n + w_w x_w + w_s x_s) / (w_e + w_n + w_w + w_s)

where w_e, w_n, w_w, w_s are the variation coefficients of the neighborhood pixels x_e, x_n, x_w, x_s with respect to x_o, and the repaired pixel x'_o is obtained as their weighted average;
after variation processing of all pixels whose values fall outside [γ1, γ2], the repaired pixels together with all pixels whose values lie within [γ1, γ2] constitute the restored image.
5. The image forensics and adversarial attack defense method based on class-feature restoration and fusion according to claim 1, characterized in that the original image is denoised by a wavelet-transform denoising technique to obtain the denoised image.
6. The image forensics and adversarial attack defense method based on class-feature restoration and fusion according to claim 1, characterized in that the restored image and the denoised image are fused into the fused image according to

    I_out(i, j) = M_ij · I_repair(i, j) + (1 − M_ij) · I_denoise(i, j)        (10)

where I_out is the fused image, I_repair(i, j) is the pixel value at position (i, j) in the restored image, I_denoise(i, j) is the pixel value at position (i, j) in the denoised image, M_ij = 1 on the zeroed reconstruction region of the heatmap, and M_ij = 0 on all other pixels.
7. The image forensics and adversarial attack defense method based on class-feature restoration and fusion according to claim 1, characterized in that, for each target class, the candidate classification regions are ranked by score(p) from high to low, the top 10-15 candidate regions are extracted as inputs to the classification model, and the highest classification probability among them is selected as the final classification probability of the model, thereby realizing the defense against the adversarial attack.
CN202011175112.2A 2020-10-28 2020-10-28 Image evidence obtaining and resistance attack defending method based on class feature restoration fusion Active CN112215780B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011175112.2A CN112215780B (en) 2020-10-28 2020-10-28 Image evidence obtaining and resistance attack defending method based on class feature restoration fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011175112.2A CN112215780B (en) 2020-10-28 2020-10-28 Image evidence obtaining and resistance attack defending method based on class feature restoration fusion

Publications (2)

Publication Number Publication Date
CN112215780A CN112215780A (en) 2021-01-12
CN112215780B true CN112215780B (en) 2024-03-19

Family

ID=74057365

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011175112.2A Active CN112215780B (en) 2020-10-28 2020-10-28 Image evidence obtaining and resistance attack defending method based on class feature restoration fusion

Country Status (1)

Country Link
CN (1) CN112215780B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115428435A (en) * 2021-01-29 2022-12-02 富士胶片株式会社 Information processing device, imaging device, information processing method, and program
CN113792789B (en) * 2021-09-14 2024-03-29 中国科学技术大学 Class-activated thermodynamic diagram-based image tampering detection and positioning method and system
CN115937994B (en) * 2023-01-06 2023-05-30 南昌大学 Data detection method based on deep learning detection model

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101674397A (en) * 2009-09-27 2010-03-17 上海大学 Repairing method of scratch in video sequence


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CIIDefence: Defeating Adversarial Attacks by Fusing Class-specific Image Inpainting and Image Denoising; Puneet Gupta et al.; 2019 IEEE/CVF International Conference on Computer Vision (ICCV); 2020-02-27; pp. 6708-6717 *
Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization; Ramprasaath R. Selvaraju et al.; https://arxiv.org/abs/1610.02391; 2019-12-03; pp. 1-23 *

Also Published As

Publication number Publication date
CN112215780A (en) 2021-01-12

Similar Documents

Publication Publication Date Title
Wu et al. Deep matching and validation network: An end-to-end solution to constrained image splicing localization and detection
CN112215780B (en) Image evidence obtaining and resistance attack defending method based on class feature restoration fusion
Walia et al. Digital image forgery detection: a systematic scrutiny
Wu et al. Busternet: Detecting copy-move image forgery with source/target localization
Kim et al. Median filtered image restoration and anti-forensics using adversarial networks
Mahmood et al. Copy‐move forgery detection technique for forensic analysis in digital images
Mushtaq et al. Digital image forgeries and passive image authentication techniques: a survey
Abidin et al. Copy-move image forgery detection using deep learning methods: a review
CN111079816A (en) Image auditing method and device and server
CN110334622B (en) Pedestrian retrieval method based on adaptive feature pyramid
Thajeel et al. A Novel Approach for Detection of Copy Move Forgery using Completed Robust Local Binary Pattern.
Zhang et al. No one can escape: A general approach to detect tampered and generated image
Hilles et al. Latent fingerprint enhancement and segmentation technique based on hybrid edge adaptive dtv model
CN111476727B (en) Video motion enhancement method for face-changing video detection
Dixit et al. Copy-move forgery detection exploiting statistical image features
Nowroozi et al. Detecting high-quality GAN-generated face images using neural networks
Mani et al. A survey on digital image forensics: Metadata and image forgeries
CN111259792A (en) Face living body detection method based on DWT-LBP-DCT characteristics
Abdulqader et al. Detection of tamper forgery image in security digital mage
Saealal et al. Three-Dimensional Convolutional Approaches for the Verification of Deepfake Videos: The Effect of Image Depth Size on Authentication Performance
Aydin Comparison of color features on copy-move forgery detection problem using HSV color space
Kumar et al. Syn2real: Forgery classification via unsupervised domain adaptation
Yohannan et al. Detection of copy-move forgery based on Gabor filter
Chaitra et al. Digital image forgery: taxonomy, techniques, and tools–a comprehensive study
CN114140674B (en) Electronic evidence availability identification method combined with image processing and data mining technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant