CN112686249B - Grad-CAM attack method based on adversarial patches


Info

Publication number
CN112686249B
CN112686249B (application CN202011528278.8A)
Authority
CN
China
Prior art keywords
image
patch
adversarial
Grad-CAM
Prior art date
Legal status
Active
Application number
CN202011528278.8A
Other languages
Chinese (zh)
Other versions
CN112686249A (en)
Inventor
屈丹
司念文
张文林
常禾雨
罗向阳
魏雪娟
郝朝龙
Current Assignee
Information Engineering University of PLA Strategic Support Force
Zhengzhou Xinda Institute of Advanced Technology
Original Assignee
Information Engineering University of PLA Strategic Support Force
Zhengzhou Xinda Institute of Advanced Technology
Priority date
Filing date
Publication date
Application filed by Information Engineering University of PLA Strategic Support Force and Zhengzhou Xinda Institute of Advanced Technology
Priority to CN202011528278.8A
Publication of CN112686249A
Application granted
Publication of CN112686249B
Legal status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a Grad-CAM attack method based on adversarial patches. The method comprises the following steps: Step 1: initialize a perturbation z and generate an adversarial patch on an input image x according to a preset binary mask m, obtaining an adversarial image x' containing the adversarial patch; Step 2: generate a saliency map of the adversarial image x' using the Grad-CAM method; Step 3: calculate a loss function, whose optimization objectives comprise keeping the class of the adversarial image x' consistent with the original class of the input image x and guiding the saliency map of the adversarial image x' towards the region where the adversarial patch is located; Step 4: update the perturbation z using the calculated loss function to generate a new adversarial image x'; Step 5: repeat steps 2 to 4 until the set number of iterations is reached, and take the adversarial image x' at that point as the final adversarial image.

Description

Grad-CAM attack method based on adversarial patches
Technical Field
The invention relates to the technical field of interpretable deep learning, and in particular to a Grad-CAM attack method based on adversarial patches.
Background
As a highly representative deep neural network model, the convolutional neural network (CNN) poses interpretability and intelligibility questions of significant research value. The gradient-weighted class activation mapping (Grad-CAM) method is an effective CNN interpretation method: it explains a CNN's decision by highlighting, in a saliency map, the feature regions relevant to the specific decision result.
Saliency-map-based CNN interpretation methods are important for understanding the representations and decisions of CNNs, and they also determine the quality of downstream tasks built on interpretability. However, recent studies have shown that such saliency-map-based interpretation methods are at risk of attack: carefully crafted adversarial samples can consistently bias the interpretation result of a saliency map towards a specific region, producing invalid or even deliberately misleading interpretations.
Document 4 ("gharbani, a, ABID, a, & zu, j.interpretation of Neural Networks is framework [ C ]. AAAI 2019: Third-Third AAAI Conference on therefor, 33(1)," 3681-3688, 2018. ") first verifies that BP (reference 1" SIMONYAN K, VEDALDI a, zissman. deep semiconductor Networks: visual analysis protocols [ J ]. arxivpirnprep, arXiv:1312.6034,2013. ") and Integrated graphics (reference 3" sports yam ", tale a, n q. audio analysis documents [ C ]. express vulnerability ] can be used to maximize the two methods of image vulnerability and image vulnerability by using a map of the two methods of image vulnerability and image vulnerability. Document 5 ("DOMBROWSKI A K, ALBER M, ANDERS C, et al. ExPLANTIONS can be manipulated and geometries to blank [ C ]. In NeurIPS 2019: third-third Conference on Neural Information Processing Systems, pp.13589-13600,2019.") further studies that interpretation features specified In The results of methods such as BP, Guided BP (reference 2 "SPRINGENBERG, TOBIAS J, DOSOVITSKIY, et al, marking. guiding for similarity: The net connected volume [ J ]. arXivPrepriarXiv: 1412.6806,2014.") and previously Integrated graphics can be made irrelevant under The constraint of specific loss functions. Document 6 ("HEO J, JOO S, MOON t. food Neural Network interpretation view adaptive Model management [ C ]. In neuroips 2019: third-third Conference on Neural Information Processing Systems, pp.2925-2936,2019.") implements an attack on the Grad-CAM interpretation result by fine-tuning the Model parameters against resistance, using the Model after parameter fine-tuning to guide the Grad-CAM interpretation result always toward a specific region without modifying the input image. In summary, documents 4 and 5 propose attack methods in which results are specifically interpreted by generating a challenge sample whose visual change is imperceptible. Although the confrontation sample has good camouflage characteristics, the confrontation sample is difficult to apply in reality. Meanwhile, documents 4 and 5 do not study the attack against the Grad-CAM method. Although document 6 does not need to add a perturbation to form a countermeasure sample, and can implement an attack on the Grad-CAM method, the adopted parameter tuning method needs to retrain the model, so that the cost of the attack is also large.
Disclosure of Invention
To address the problems that existing attack methods are difficult to apply in practice and unsuited to attacking Grad-CAM, or incur a high attack cost, the invention provides a Grad-CAM attack method based on adversarial patches.
The invention provides a Grad-CAM attack method based on adversarial patches, which comprises the following steps:
Step 1: initializing a perturbation z, and generating an adversarial patch on an input image x according to a preset binary mask m to obtain an adversarial image x' containing the adversarial patch;
Step 2: generating a saliency map of the adversarial image x' using the Grad-CAM method;
Step 3: calculating a loss function; wherein the optimization objectives of the loss function comprise: keeping the class of the adversarial image x' consistent with the original class of the input image x; and guiding the saliency map of the adversarial image x' towards the region where the adversarial patch is located;
Step 4: updating the perturbation z using the calculated loss function to generate a new adversarial image x';
Step 5: repeating steps 2 to 4 until the set number of iterations is reached, and taking the adversarial image x' at that point as the final adversarial image.
Further, in step 1, the mask value of the region where the adversarial patch is located is set to 1, and the mask values of the remaining regions are set to 0.
Further, in step 1, the adversarial image x' containing the adversarial patch is obtained according to formula (1):
x′ = x ⊙ (1 − m) + z ⊙ m (1)
wherein m denotes the binary mask and ⊙ denotes the Hadamard product.
Further, step 2 comprises:
calculating the logits scores of the adversarial image x':
(S_1, …, S_c, …, S_N) = f(x′; θ) (2)
wherein f denotes the target network to be attacked, θ denotes the weight parameters of the target network f, S_c denotes the output score of class c of the target network f, N denotes the total number of classes, and 1 ≤ c ≤ N.
Further, in step 2, the saliency map L^c of the adversarial image x' is generated according to formula (3):
L^c = ReLU( Σ_{k=1}^{K} α_k^c A^k ) (3)
wherein A^k denotes the k-th channel of the highest-level feature map extracted by the target network f from the input image x, k = 1, 2, 3, …, K, K denotes the number of channels of the highest-level feature map; α_k^c denotes the weight of the k-th channel; and c denotes the original class of the input image x.
Further, in step 3, the objective function designed to keep the class of the adversarial image x' consistent with the original class of the input image x is formula (4):
min_z loss_CE(f(x′; θ), c) (4)
wherein c denotes the original class of the input image, f denotes the target network to be attacked, θ denotes the weight parameters of the target network f, z denotes the perturbation, and loss_CE denotes the cross-entropy loss function.
Further, in step 3, the objective function designed to guide the saliency map of the adversarial image x' towards the region where the adversarial patch is located is formula (5):
max_z Σ_{i,j} (L^c ⊙ m)_{ij} (5)
wherein (L^c ⊙ m)_{ij} denotes the pixel in the i-th row and j-th column of the saliency-map region where the adversarial patch is located.
Further, in step 3, the loss function Loss is calculated according to formula (6):
Loss = loss_CE(f(x′; θ), c) − λ·Σ_{i,j} (L^c ⊙ m)_{ij} (6)
wherein λ denotes a weighting parameter that balances the two optimization objectives.
Further, in step 4, the perturbation z is updated according to formula (7):
z′ = z − lr·sign(∇_z Loss) (7)
wherein z′ denotes the updated perturbation; lr denotes the learning rate used in the update; sign denotes the sign function, whose values lie in {+1, −1}; Loss denotes the loss function; and ∇ denotes the gradient operator.
The invention has the following beneficial effects:
The Grad-CAM attack method based on adversarial patches provided by the invention can generate an adversarial patch and synthesize an adversarial image targeting the Grad-CAM interpretation result of a target image, guide Grad-CAM towards the patch region while keeping the image classification result unchanged, and realize the attack on the model interpretation result by optimizing the loss function. In addition, the invention can also generate a universal adversarial patch using a batch training method, so as to obtain a generalizable universal adversarial patch and realize Grad-CAM attacks on other images of the same class.
Drawings
Fig. 1 is a schematic flow chart of the adversarial-patch-based Grad-CAM attack method according to an embodiment of the present invention;
Fig. 2 is a flow chart illustrating the generation of a saliency map using Grad-CAM according to an embodiment of the present invention;
Fig. 3 is a diagram comparing Grad-CAM attack results on the VGGNet-19-bn model according to an embodiment of the present invention;
Fig. 4 is a diagram comparing Grad-CAM attack results on 4 different models according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
An adversarial patch (Adversarial Patch) is a patch image used to attack neural-network image classification systems. By adding an adversarial patch to a target image, the target image can be misclassified by a neural network classification model, thereby realizing an attack on the image classification system. Adversarial patches are not constrained by a perturbation norm and can realize large perturbations within a small region, which gives them strong practical applicability; they are therefore commonly used for adversarial attacks in real-world scenes. The invention provides a Grad-CAM attack method based on adversarial patches, which uses the adversarial-patch approach to attack the interpretation method rather than the predictions of the attacked model.
Example 1
As shown in Fig. 1, the present invention provides a Grad-CAM attack method based on adversarial patches, which comprises:
S101: initializing a perturbation z, and generating an adversarial patch on an input image x according to a preset binary mask m to obtain an adversarial image x' containing the adversarial patch;
Specifically, the position at which the adversarial patch is added can be controlled through the binary mask m, thereby controlling the region towards which Grad-CAM is guided in the subsequent process.
S102: generating a saliency map of the adversarial image x' using the Grad-CAM method;
S103: calculating a loss function; wherein the loss function comprises two optimization objectives: keeping the class of the adversarial image x' consistent with the original class of the input image x; and guiding the saliency map of the adversarial image x' towards the region where the adversarial patch is located;
S104: updating the perturbation z using the calculated loss function to generate a new adversarial image x';
S105: repeating steps S102 to S104 until the set number of iterations is reached, and taking the adversarial image x' at that point as the final adversarial image.
The Grad-CAM attack method based on adversarial patches provided by this embodiment of the invention can generate an adversarial patch and synthesize an adversarial image targeting the Grad-CAM interpretation result of a target image, and attack the Grad-CAM interpretation method without changing the image classification result, so that the salient region of the target image can no longer be accurately located and the interpretation becomes erroneous. The purpose of using the adversarial patch in this embodiment is therefore not to mislead the image classification result, but to attack the interpretation result of the interpretation method.
Example 2
On the basis of the above embodiment, this embodiment of the present invention provides another Grad-CAM attack method based on adversarial patches, which comprises the following steps:
S201: initializing a perturbation z, and generating an adversarial patch on an input image x according to a preset binary mask m to obtain an adversarial image x' containing the adversarial patch;
Specifically, the mask value of the region where the adversarial patch is located is set to 1, and the mask values of the remaining regions are set to 0; the adversarial image x' containing the adversarial patch is then obtained according to formula (1):
x′ = x ⊙ (1 − m) + z ⊙ m (1)
wherein m denotes the binary mask and ⊙ denotes the Hadamard product.
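For illustration only, formula (1) amounts to element-wise tensor arithmetic. The following minimal PyTorch sketch composes the adversarial image; the function name apply_patch, the image size and the patch location are our own illustrative assumptions, not part of the patent:

```python
import torch

def apply_patch(x: torch.Tensor, z: torch.Tensor, m: torch.Tensor) -> torch.Tensor:
    """Compose the adversarial image x' = x (1 - m) + z m  (formula (1)).

    x: input image, shape (C, H, W), values in [0, 1]
    z: perturbation carrying the patch content, same shape as x
    m: binary mask, 1 inside the patch region and 0 elsewhere
    """
    return x * (1 - m) + z * m

# Example: a 50x50 patch in the upper-left corner of a 224x224 image
x = torch.rand(3, 224, 224)                      # stand-in for the input image
z = torch.rand(3, 224, 224, requires_grad=True)  # perturbation z to be optimized
m = torch.zeros(3, 224, 224)
m[:, :50, :50] = 1.0                             # mask value 1 where the patch sits
x_adv = apply_patch(x, z, m)
```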
S202: generating a saliency map of the adversarial image x' using the Grad-CAM method;
Specifically, as shown in Fig. 2, among the class-activation-map family of methods, the Grad-CAM method adopted in this embodiment is simple to implement, applicable to a wide range of networks, and produces good visualizations. The procedure is as follows:
First, the logits scores of the adversarial image x' are calculated:
(S_1, …, S_c, …, S_N) = f(x′; θ) (2)
wherein f denotes the target network to be attacked, θ denotes the weight parameters of the target network f, S_c denotes the output score of class c of the target network f, N denotes the total number of classes, and 1 ≤ c ≤ N.
Then, the saliency map L^c of the adversarial image x' is generated according to formula (3):
L^c = ReLU( Σ_{k=1}^{K} α_k^c A^k ) (3)
wherein A^k denotes the k-th channel of the highest-level feature map extracted by the target network f from the input image, k = 1, 2, 3, …, K, K denotes the number of channels of the highest-level feature map; α_k^c denotes the weight of the k-th channel; and c denotes the original class of the input image x. The channel weights are derived from the derivative of the score of a given class (in standard Grad-CAM, by global-average-pooling ∂S_c/∂A^k over the spatial dimensions), and therefore carry class-discriminative information.
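For illustration only, formulas (2) and (3) can be sketched in PyTorch as follows. This is a generic Grad-CAM computation in which the channel weights follow the standard Grad-CAM definition (global-average-pooled gradients of the class score); the names grad_cam and feature_layer are our own illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def grad_cam(model, feature_layer, x_adv, c):
    """Return the logits (formula (2)) and the Grad-CAM map L^c (formula (3)).

    model: target network f with weights theta
    feature_layer: module whose output is the highest-level feature map A (K channels)
    x_adv: adversarial image x', shape (1, C, H, W)
    c: original class index of the input image
    """
    feats = {}
    handle = feature_layer.register_forward_hook(
        lambda mod, inp, out: feats.update(A=out))    # capture A^k on the forward pass
    scores = model(x_adv)                             # (S_1, ..., S_c, ..., S_N) = f(x'; theta)
    handle.remove()

    A = feats["A"]                                    # shape (1, K, u, v)
    # Channel weights alpha_k^c: gradient of S_c w.r.t. A, global-average-pooled;
    # create_graph=True keeps the map differentiable w.r.t. the perturbation z.
    grads = torch.autograd.grad(scores[0, c], A, create_graph=True)[0]
    alpha = grads.mean(dim=(2, 3), keepdim=True)      # (1, K, 1, 1)
    L = F.relu((alpha * A).sum(dim=1, keepdim=True))  # ReLU(sum_k alpha_k^c A^k)
    # Upsample to the input resolution so the map can be masked with m later.
    L = F.interpolate(L, size=x_adv.shape[-2:], mode="bilinear", align_corners=False)
    return scores, L
```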
S203: calculating a loss function; wherein the loss function comprises two optimization objectives: keeping the class of the adversarial image x' consistent with the original class of the input image x; and guiding the saliency map of the adversarial image x' towards the region where the adversarial patch is located;
Specifically, in this embodiment the cross-entropy loss is used to constrain the classification result of the adversarial image to remain unchanged. First, the objective function that keeps the class of the adversarial image x' consistent with the original class of the input image x is designed as formula (4):
min_z loss_CE(f(x′; θ), c) (4)
wherein c denotes the original class of the input image, f denotes the target network to be attacked, θ denotes the weight parameters of the target network f, z denotes the perturbation, and loss_CE denotes the cross-entropy loss function.
Then, the objective function that guides the saliency map of the adversarial image x' towards the region where the adversarial patch is located is designed as formula (5):
max_z Σ_{i,j} (L^c ⊙ m)_{ij} (5)
wherein (L^c ⊙ m)_{ij} denotes the pixel in the i-th row and j-th column of the saliency-map region where the adversarial patch is located.
Specifically, in this embodiment the saliency map of the adversarial image x' is biased towards the region of the adversarial patch by summing the pixels of the patch region of the saliency map and maximizing this sum through updates of the perturbation.
Finally, the objective functions of the two optimization targets are combined, and the loss function Loss is calculated according to formula (6):
Loss = loss_CE(f(x′; θ), c) − λ·Σ_{i,j} (L^c ⊙ m)_{ij} (6)
wherein λ denotes a weighting parameter that balances the two optimization objectives.
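For illustration only, a minimal sketch of formula (6), with formulas (4) and (5) as its two terms; the name attack_loss and the assumption that the saliency map has already been upsampled to the input resolution are ours:

```python
import torch
import torch.nn.functional as F

def attack_loss(scores, saliency, m1, c, lam):
    """Loss = loss_CE(f(x'; theta), c) - lambda * sum_ij (L^c m)_ij  (formula (6)).

    scores: logits f(x'; theta), shape (1, N)
    saliency: Grad-CAM map L^c, shape (1, 1, H, W), aligned with the input
    m1: single-channel binary patch mask, shape (1, 1, H, W)
    c: original class index
    lam: balancing parameter lambda between the two objectives
    """
    target = torch.tensor([c], device=scores.device)
    ce = F.cross_entropy(scores, target)   # formula (4): keep the class unchanged
    patch_energy = (saliency * m1).sum()   # formula (5): saliency inside the patch
    return ce - lam * patch_energy         # minimizing this maximizes the patch energy
```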
S204: updating the perturbation z using the calculated loss function to generate a new adversarial image x';
Specifically, the perturbation z is updated according to formula (7):
z′ = z − lr·sign(∇_z Loss) (7)
wherein z′ denotes the updated perturbation; lr denotes the learning rate used in the update; sign denotes the sign function, whose values lie in {+1, −1}; Loss denotes the loss function; and ∇ denotes the gradient operator.
S205: repeating steps S202 to S204 until the set number of iterations is reached, and taking the adversarial image x' at that point as the final adversarial image.
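For illustration only, steps S201 to S205 can be assembled into the following optimization loop, which reuses the illustrative helpers apply_patch, grad_cam and attack_loss sketched above and applies the sign-gradient update of formula (7); the hyperparameter values are our own assumptions:

```python
import torch

def run_attack(model, feature_layer, x, m, c, n_iters=500, lr=0.01, lam=0.05):
    """Iteratively optimize the patch content z and return the final adversarial image."""
    model.eval()
    z = torch.rand_like(x, requires_grad=True)     # S201: initialize the perturbation z
    m1 = m[:1].unsqueeze(0)                        # (1, 1, H, W) mask for the saliency map

    for _ in range(n_iters):
        x_adv = apply_patch(x, z, m).unsqueeze(0)  # x' = x (1 - m) + z m
        scores, saliency = grad_cam(model, feature_layer, x_adv, c)  # S202
        loss = attack_loss(scores, saliency, m1, c, lam)             # S203

        grad = torch.autograd.grad(loss, z)[0]
        with torch.no_grad():
            z = z - lr * grad.sign()               # S204: z' = z - lr * sign(grad_z Loss)
            z.clamp_(0, 1)                         # keep the patch a valid image
        z.requires_grad_(True)

    return apply_patch(x, z, m).detach()           # S205: final adversarial image
```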
According to the Grad-CAM attack method based on adversarial patches provided by this embodiment, an adversarial patch is generated in a targeted manner and an adversarial image is synthesized; by optimizing the loss function, the Grad-CAM interpretation result is guided towards the patch region while the image classification result remains unchanged, finally realizing the attack on the interpretation result.
In addition, this embodiment can also generate a universal adversarial patch using a batch training method, so as to obtain a generalizable universal adversarial patch and realize Grad-CAM attacks on other images of the same class.
To verify the effectiveness of the Grad-CAM attack method based on adversarial patches, the invention also provides the following experimental data.
As shown in Fig. 3, visualization results are presented for an example image randomly selected from the ILSVRC2012 validation set and its adversarial image. Column 1 shows the input image, annotated in the form (class, classification probability); column 2 shows the Grad-CAM of the input image, annotated with its (ER_p, ER_b) values; column 3 shows the adversarial image generated from the input image by the method of the present invention, annotated in the form (class, classification probability); column 4 shows the Grad-CAM of the adversarial image, annotated with its (ER_p, ER_b) values. The ER (Energy Ratio) value denotes the ratio of the energy of a given region of the saliency map to the energy of the whole saliency map; ER_p denotes the energy ratio of the image patch region, and ER_b denotes the energy ratio of the image border region. The larger the ER value, the more attention Grad-CAM pays to that region. As can be seen from the Grad-CAM map of the adversarial image, the method achieves a good attack effect and guides the localization of the target towards the patch region in the upper-left corner.
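For illustration only, the ER value described above can be computed directly from a saliency map and a region mask; the function name energy_ratio is our own:

```python
import torch

def energy_ratio(saliency: torch.Tensor, region_mask: torch.Tensor) -> float:
    """ER: energy of a region of the saliency map over the energy of the whole map.

    saliency: non-negative Grad-CAM map, shape (H, W)
    region_mask: binary mask of the region (patch region for ER_p, border for ER_b)
    """
    return ((saliency * region_mask).sum() / saliency.sum().clamp_min(1e-12)).item()
```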
As shown in Fig. 4, a comparison of visualization results for example images is presented, covering two input images ("junco" and "espresso"), their adversarial images, and the Grad-CAM saliency maps corresponding to the input and adversarial images. Column 1 is the input image junco; column 2 is the Grad-CAM of the input image junco; column 3 is the adversarial image generated for junco; column 4 is the Grad-CAM of the adversarial image junco; column 5 is the input image espresso; column 6 is the Grad-CAM of the input image espresso; column 7 is the adversarial image generated for espresso; column 8 is the Grad-CAM of the adversarial image espresso. The results show that the method achieves a good attack effect on the Grad-CAM interpretations of 4 networks: VGGNet-16, VGGNet-19-bn, ResNet-50 and DenseNet-161.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (6)

1. A Grad-CAM attack method based on adversarial patches, characterized by comprising the following steps:
Step 1: initializing a perturbation z, and generating an adversarial patch on an input image x according to a preset binary mask m to obtain an adversarial image x' containing the adversarial patch;
Step 2: generating a saliency map of the adversarial image x' using the Grad-CAM method;
Step 3: calculating a loss function; wherein the optimization objectives of the loss function comprise: keeping the class of the adversarial image x' consistent with the original class of the input image x; and guiding the saliency map of the adversarial image x' towards the region where the adversarial patch is located; specifically comprising:
designing the objective function that keeps the class of the adversarial image x' consistent with the original class of the input image x as formula (4):
min_z loss_CE(f(x′; θ), c) (4)
wherein c denotes the original class of the input image, f denotes the target network to be attacked, θ denotes the weight parameters of the target network f, z denotes the perturbation, and loss_CE denotes the cross-entropy loss function;
designing the objective function that guides the saliency map of the adversarial image x' towards the region where the adversarial patch is located as formula (5):
max_z Σ_{i,j} (L^c ⊙ m)_{ij} (5)
wherein (L^c ⊙ m)_{ij} denotes the pixel in the i-th row and j-th column of the saliency-map region where the adversarial patch is located;
combining formulas (4) and (5) to obtain the final loss function, formula (6), which guides the saliency map of the adversarial image x' towards the region where the adversarial patch is located while keeping the class of the adversarial image x' consistent with the original class of the input image x:
Loss = loss_CE(f(x′; θ), c) − λ·Σ_{i,j} (L^c ⊙ m)_{ij} (6)
wherein λ denotes a weighting parameter that balances the two optimization objectives;
Step 4: updating the perturbation z using the calculated loss function and generating a new adversarial image x';
Step 5: repeating steps 2 to 4 until the set number of iterations is reached, and taking the adversarial image x' at that point as the final adversarial image.
2. The method according to claim 1, characterized in that in step 1, the mask value of the region where the adversarial patch is located is set to 1, and the mask values of the remaining regions are set to 0.
3. The method according to claim 1, characterized in that in step 1, the adversarial image x' containing the adversarial patch is obtained according to formula (1):
x′ = x ⊙ (1 − m) + z ⊙ m (1)
wherein m denotes the binary mask and ⊙ denotes the Hadamard product.
4. The method according to claim 1, characterized in that step 2 comprises:
calculating the logits scores of the adversarial image x':
(S_1, …, S_c, …, S_N) = f(x′; θ) (2)
wherein f denotes the target network to be attacked, θ denotes the weight parameters of the target network f, S_c denotes the output score of class c of the target network f, N denotes the total number of classes, and 1 ≤ c ≤ N.
5. The method according to claim 4, characterized in that in step 2, the saliency map L^c of the adversarial image x' is generated according to formula (3):
L^c = ReLU( Σ_{k=1}^{K} α_k^c A^k ) (3)
wherein A^k denotes the k-th channel of the highest-level feature map extracted by the target network f from the input image x, k = 1, 2, 3, …, K, K denotes the number of channels of the highest-level feature map; α_k^c denotes the weight of the k-th channel; and c denotes the original class of the input image x.
6. The method according to claim 1, characterized in that in step 4, the perturbation z is updated according to formula (7):
z′ = z − lr·sign(∇_z Loss) (7)
wherein z′ denotes the updated perturbation; lr denotes the learning rate used in the update; sign denotes the sign function, whose values lie in {+1, −1}; Loss denotes the loss function; and ∇ denotes the gradient operator.
CN202011528278.8A 2020-12-22 2020-12-22 Grad-CAM attack method based on adversarial patches Active CN112686249B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011528278.8A CN112686249B (en) 2020-12-22 2020-12-22 Grad-CAM attack method based on adversarial patches

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011528278.8A CN112686249B (en) 2020-12-22 2020-12-22 Grad-CAM attack method based on adversarial patches

Publications (2)

Publication Number Publication Date
CN112686249A (en) 2021-04-20
CN112686249B (en) 2022-01-25

Family

ID=75450532

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011528278.8A Active CN112686249B (en) 2020-12-22 2020-12-22 Grad-CAM attack method based on adversarial patches

Country Status (1)

Country Link
CN (1) CN112686249B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113436073B (en) * 2021-06-29 2023-04-07 中山大学 Real image super-resolution robust method and device based on frequency domain


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11568183B2 (en) * 2019-05-26 2023-01-31 International Business Machines Corporation Generating saliency masks for inputs of models using saliency metric

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107273870A (en) * 2017-07-07 2017-10-20 郑州航空工业管理学院 The pedestrian position detection method of integrating context information under a kind of monitoring scene
CN110009679A (en) * 2019-02-28 2019-07-12 江南大学 A kind of object localization method based on Analysis On Multi-scale Features convolutional neural networks
CN110188795A (en) * 2019-04-24 2019-08-30 华为技术有限公司 Image classification method, data processing method and device
CN112085069A (en) * 2020-08-18 2020-12-15 中国人民解放军战略支援部队信息工程大学 Multi-target countermeasure patch generation method and device based on integrated attention mechanism
CN112364915A (en) * 2020-11-10 2021-02-12 浙江科技学院 Imperceptible counterpatch generation method and application

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Adversarial Patch";Tom B. Brown等;《arXiv》;20180517;第1-6页 *
"Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization";Ramprasaath R. Selvaraju等;《arXiv》;20191206;第1-23页 *

Also Published As

Publication number Publication date
CN112686249A (en) 2021-04-20

Similar Documents

Publication Publication Date Title
CN109214327B (en) Anti-face recognition method based on PSO
US20230186056A1 (en) Grabbing detection method based on rp-resnet
CN110136162B (en) Unmanned aerial vehicle visual angle remote sensing target tracking method and device
CN108710893B (en) Digital image camera source model classification method based on feature fusion
CN113674140A (en) Physical countermeasure sample generation method and system
CN110852393A (en) Remote sensing image segmentation method and system
CN112233129B (en) Deep learning-based parallel multi-scale attention mechanism semantic segmentation method and device
CN110111346B (en) Remote sensing image semantic segmentation method based on parallax information
CN111783551A (en) Confrontation sample defense method based on Bayes convolutional neural network
WO2022166797A1 (en) Image generation model training method, generation method, apparatus, and device
CN112686249B (en) Grad-CAM attack method based on anti-patch
CN111192206A (en) Method for improving image definition
Wang et al. Fast smoothing technique with edge preservation for single image dehazing
CN112418032A (en) Human behavior recognition method and device, electronic equipment and storage medium
CN114240951B (en) Black box attack method of medical image segmentation neural network based on query
Zhang et al. Single image dehazing based on bright channel prior model and saliency analysis strategy
CN113935496A (en) Robustness improvement defense method for integrated model
CN112560034B (en) Malicious code sample synthesis method and device based on feedback type deep countermeasure network
CN111242839A (en) Image scaling and cutting method based on scale grade
CN116597146A (en) Semantic segmentation method for laser radar sparse point cloud data
CN115510986A (en) Countermeasure sample generation method based on AdvGAN
CN115294381A (en) Small sample image classification method and device based on feature migration and orthogonal prior
Timchenko et al. Processing laser beam spot images using the parallel-hierarchical network for classification and forecasting their energy center coordinates
CN112529047A (en) Countermeasure sample generation method based on gradient shielding
CN114372537B (en) Image description system-oriented universal countermeasure patch generation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant