CN113469329B - Method for generating an adversarial patch without sample data - Google Patents

Method for generating an adversarial patch without sample data

Info

Publication number
CN113469329B
CN113469329B (application CN202110708530.1A)
Authority
CN
China
Prior art keywords
patch
attack
sample
adversarial
target
Prior art date
Legal status
Active
Application number
CN202110708530.1A
Other languages
Chinese (zh)
Other versions
CN113469329A (en)
Inventor
周星宇 (Zhou Xingyu)
俞璐 (Yu Lu)
郑翔 (Zheng Xiang)
武欣嵘 (Wu Xinrong)
潘志松 (Pan Zhisong)
段晔鑫 (Duan Yexin)
张武 (Zhang Wu)
Current Assignee
Army Engineering University of PLA
Original Assignee
Army Engineering University of PLA
Priority date
Filing date
Publication date
Application filed by Army Engineering University of PLA
Priority to CN202110708530.1A
Publication of CN113469329A
Application granted
Publication of CN113469329B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

A method for generating an adversarial patch without sample data, relating to the technical field of deep neural networks. The generated attack patch is placed near the attacked sample, misleading the classification model into identifying the adversarial example as another object. In the absence of training data, a picture whose RGB pixel values are all 0 is used as the background, and an untargeted adversarial patch is generated by fooling the feature information learned by each layer of the deep neural network. The generated attack patch can also be placed near the attacked sample to mislead the classification model into identifying the adversarial example as a specified class: a background picture is constructed, the adversarial patch is randomly deformed and placed at a random position of the background picture for training, and a targeted attack patch is generated using the information implicit in the untargeted attack patch. With this method, an attacker can generate an adversarial patch without using any training data and attack an image classification model built on a deep neural network.

Description

Adversarial patch generation method without sample data
Technical Field
The invention relates to the technical field of deep neural networks, and in particular to a method for generating an adversarial patch without sample data.
Background
Deep neural networks have achieved breakthrough success in many computer vision tasks such as image classification, object detection, and semantic segmentation. However, research shows that deep neural networks are vulnerable to adversarial perturbations: an attacker only needs to add certain specific pixel perturbations to a normal image sample to cause the deep neural network model to make a wrong judgment.
By restricting the adversarial perturbation to a local region, a printer can be used to print the perturbation as a patch and carry out an attack. An adversarial patch can be applied to a traffic sign so that an autonomous vehicle misjudges it; it can be attached to clothing so that an attacker can hide from surveillance cameras; or it can be placed near goods to prevent malicious crawling by web crawlers. At the same time, research on adversarial patches helps improve the ability of deep neural networks to defend against malicious attacks.
As shown in fig. 1, to enhance the attack capability of an adversarial patch in physical scenes, the patch should still attack the model when placed at different positions in different input pictures, and should have a certain robustness to angular rotation and scale changes.
Existing methods for generating adversarial patches (e.g., GoogleAP, LaVAN, PS-GAN) require a large amount of training data; however, the training data of the attacked model is often difficult to obtain. For example, autonomous-driving companies never disclose the data used to train their detectors, the data used to train classifiers on online shopping websites is confidential, and face-verification devices take extra care to avoid storing face data in the system. Without training data, it is difficult for current data-driven attack methods to generate attack patches.
Chinese patent CN112085069A, "Multi-target adversarial patch generation method and device based on an integrated attention mechanism", locates the key classification regions of the input image with an integrated attention mechanism to ensure that the adversarial patch achieves better attack performance and transferability; the generator's input makes full use of the original image information, so the adversarial patches it produces are more effective; the generator's input also fuses multi-target class information, so any specified class of the target model can be attacked, realizing multi-target attacks; and the discriminator's input is cropped so that the discriminator learns more context information, improving the visual quality of the adversarial patch.
Chinese patent CN112241790A, "Small adversarial patch generation method and device", randomly initializes the adversarial patch image, adds it to a selected pasting region on a target object in the training data, and produces an adversarial example; the adversarial example is fed into a deep learning model for adversarial feature extraction, while a benign sample without the patch is fed into the model for benign feature extraction; the adversarial and benign features are input together into a feature-enhancement loss function to compute a loss; the loss is added to the model loss function, and after back-propagation an optimizer updates the pixel values of the patch; after a preset number of iterations, the patch causes the deep learning model to output a wrong result, and the generation process ends. This invention makes the physical adversarial patch smaller, reduces manufacturing cost, lowers the patch's detectability, and more easily breaks through detection-based defenses.
However, the methods above all depend heavily on the quality of sample data, and the effect of the generated adversarial patch is strongly correlated with the known sample data. In the absence of sample data, no adversarial patch can be generated.
Disclosure of Invention
The invention provides a method for generating an adversarial patch without sample data: an attacker can generate an adversarial patch without using any training data and attack an image classification model built on a deep neural network.
A method for generating an adversarial patch without sample data comprises the following steps:
Step 1: generation of the untargeted adversarial patch p_nt without sample data: the generated attack patch is placed near the attacked sample, misleading the classification model into identifying the adversarial example as another object; in the absence of training data x, a picture I_z whose RGB pixel values are all 0 is used as the background, and the untargeted adversarial patch p_nt is generated by fooling the feature information learned by each layer of the deep neural network.
Step 2: targeted countermeasure patch p without sample data t The generation method comprises the following steps: the generated attack patch is placed near the attacked sample to mislead the classification modelIdentifying the confrontation sample as a specified category; constructing a background picture, randomly deforming the confrontation patch, placing the confrontation patch at a random position of the background picture for training, and attacking the patch p by using the non-target nt Implicit information generation targeted attack patch p t
Depending on whether a target class is specified, attacks against deep neural networks can be divided into untargeted attacks and targeted attacks. An untargeted attack misleads the deep neural network into judging the input image sample as any wrong class; a targeted attack misleads the deep neural network into misjudging the input image sample as a specified class. By comparison, an untargeted attack is suitable for hiding the information of a clean sample, while a targeted attack is more suitable for launching a directed attack on the target model.
The invention provides a method for generating an adversarial patch without sample data: an attacker can generate an adversarial patch without obtaining in advance the training data used by the attacked model, and attack an image classification model built on a deep neural network. Given the high confidentiality of training data in practical commercial systems such as online shopping and autonomous driving, data-driven adversarial patch generation methods have so far been difficult to apply. The invention exploits the structural characteristics of the classification model to generate an adversarial patch and execute the attack without sample data.
The method can generate an untargeted adversarial patch that misleads the deep neural network into judging the input image sample as a wrong class, and can also generate a targeted adversarial patch that misleads the deep neural network into misjudging the input image sample as a specified class. Different attack patches may be used depending on the specific application.
Drawings
Fig. 1 is a diagram of an example of an adversarial patch attack in the background art.
Fig. 2 is a schematic diagram of the background picture construction process.
Fig. 3 is a schematic diagram of the adversarial example construction process.
Fig. 4 shows the attack success rate of the untargeted adversarial patch p_nt.
Fig. 5 shows examples of untargeted adversarial patches p_nt.
Fig. 6 shows the attack success rate of the targeted adversarial patch p_t.
Fig. 7 shows examples of targeted adversarial patches p_t.
Fig. 8 shows the attack effect in a physical scene.
Detailed Description
A method for generating an adversarial patch without sample data comprises the following steps:
Step 1: generation of the untargeted adversarial patch p_nt without sample data: the generated attack patch is placed near the attacked sample, misleading the classification model into identifying the adversarial example as another object; in the absence of training data x, a picture I_z whose RGB pixel values are all 0 is used as the background, and the untargeted adversarial patch p_nt is generated by fooling the feature information learned by each layer of the deep neural network.
Step 2: targeted countermeasure patch p without sample data t The generation method comprises the following steps: placing the generated attack patch near the attacked sample, and identifying the confrontation sample as a specified category by the misleading classification model; constructing a background picture, randomly deforming the confrontation patch, placing the confrontation patch at a random position of the background picture for training, and attacking the patch p by using the target-free nt Implicit information generation targeted attack patch p t
Preferably, the invention places the generated attack patch near the attacked sample, and the misled classification model identifies the adversarial example as another object; the formal expression is:
$$f(A(p_{nt}, x, l, t)) \neq f(x), \quad \text{for } x \sim \mu \tag{1}$$
where x denotes an input image; μ denotes the distribution of input images; f denotes a pre-trained deep neural network classification model that outputs an estimated label f(x) for each image x; and A(p_nt, x, l, t) denotes that the untargeted adversarial patch p_nt, after a deformation t (e.g., rotation or scaling) is applied, is placed at position l of image x.
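For concreteness, a minimal sketch of the patch application operator A(p, x, l, t) is given below in PyTorch. This sketch is not part of the patent text: the function name apply_patch, the square-patch assumption, and the default parameter values are illustrative assumptions.

```python
import torch
import torchvision.transforms.functional as TF

def apply_patch(patch, image, scale=0.1, max_angle=45.0):
    """A(p, x, l, t): randomly rotate the patch p (deformation t), resize it to
    cover `scale` of the image area, and paste it at a random position l of image x.

    A sketch assuming square patches and CHW float tensors in [0, 1]; a fixed
    coverage is used here, whereas the patent samples the deformation randomly.
    """
    _, H, W = image.shape
    angle = (torch.rand(1).item() * 2 - 1) * max_angle      # t: random rotation
    patch = TF.rotate(patch, angle, expand=False)
    side = int((scale * H * W) ** 0.5)                      # t: scale to target area
    patch = TF.resize(patch, [side, side], antialias=True)
    top = torch.randint(0, H - side + 1, (1,)).item()       # l: random placement
    left = torch.randint(0, W - side + 1, (1,)).item()
    out = image.clone()
    out[:, top:top + side, left:left + side] = patch
    return out
```

The same operator is reused in the untargeted and targeted stages below.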
In the absence of training data x, a picture I_z whose RGB pixel values are all 0 is used as the background, and the feature information learned by each layer of the deep neural network is fooled to generate the untargeted adversarial patch p_nt; the formal expression is:
$$p_{nt} = \arg\max_{p}\; \mathbb{E}_{l \sim L,\, t \sim T}\left[\sum_{i=1}^{K} \left\|\Gamma_i\big(A(p, I_z, l, t)\big)\right\|\right] \tag{2}$$
where L denotes the position distribution of the adversarial patch in the background image, and T denotes the deformation distribution of the patch; Γ_i(A(p_nt, I_z, l, t)) is the output at the i-th layer of the deep neural network model f for the input image A(p_nt, I_z, l, t); and K is the number of layers in model f. Equation (2) generates the untargeted adversarial patch p_nt by maximizing the outputs of each layer of the pre-trained model f without using the training data x, producing an effect similar to over-activation of neurons. In each round of iterative optimization, the untargeted adversarial patch p_nt is randomly deformed by t (angular rotation and scaling) and then placed at a random position l of the background picture I_z, which increases the attack robustness of p_nt and improves the attack effect.
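The optimization of Equation (2) can be sketched as follows, again in PyTorch. The choice of Inception-V3, the placement of forward hooks on convolutional layers, and the mapping of the step size η = 8 (reported later, on a 0-255 pixel scale) to 8/255 are all assumptions for illustration; input normalization is omitted, and apply_patch is the sketch above.

```python
import torch
import torchvision.models as models

def generate_untargeted_patch(side=64, iters=800, lr=8/255):
    """Sketch of Eq. (2): maximize the layer activations of a frozen model
    when the patch is pasted onto an all-zero background I_z."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    f = models.inception_v3(weights="IMAGENET1K_V1").eval().to(device)
    for prm in f.parameters():
        prm.requires_grad_(False)

    activations = []
    def hook(_m, _inp, out):
        activations.append(out)
    for m in f.modules():
        if isinstance(m, torch.nn.Conv2d):
            m.register_forward_hook(hook)

    patch = torch.rand(3, side, side, device=device, requires_grad=True)
    I_z = torch.zeros(3, 299, 299, device=device)   # background with all-zero RGB values

    for _ in range(iters):
        activations.clear()
        adv = apply_patch(patch, I_z)               # A(p, I_z, l, t)
        f(adv.unsqueeze(0))
        loss = sum(a.norm() for a in activations)   # Eq. (2): sum of layer-output norms
        loss.backward()
        with torch.no_grad():
            patch += lr * patch.grad.sign()         # gradient ascent on the activations
            patch.clamp_(0, 1)
            patch.grad.zero_()
    return patch.detach()
```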
Preferably, the invention places the generated attack patch near the attacked sample, and the misled classification model identifies the adversarial example as a specified class; the formal expression is:
$$f(A(p_t, x, l, t)) = \hat{y}, \quad \text{for } x \sim \mu \tag{3}$$

where ŷ denotes the specified attack target class; x denotes an input image; μ denotes the distribution of input images; f denotes a pre-trained deep neural network classification model that outputs an estimated label f(x) for each image x; and A(p_t, x, l, t) denotes that the targeted adversarial patch p_t, after the deformation t is applied, is placed at position l of image x. The invention generates the adversarial patch without any information about the training samples, so the information of the target class ŷ is not extracted from training samples as in the prior art (GoogleAP, LaVAN, etc.).
Preferably, the untargeted attack patch p_nt generated by the invention can attack the deep neural network efficiently and is recognized by the classification model as a circle-like object, which suggests that p_nt may contain information about certain training samples. This implicit prior information is used to assist in over-activating each layer of the deep neural network to generate the targeted adversarial patch p_t; the formal expression is:
$$p_t = \arg\max_{p}\; \mathbb{E}_{l \sim L,\, t \sim T}\left[\log \Pr\big(\hat{y} \mid A(p, I_{nt}, l, t)\big)\right] \tag{4}$$

where I_nt = A(p_nt, I_z, l, t), i.e., A(p_nt, I_z, l, t) is used as the background picture I_nt for training the targeted adversarial patch p_t; the untargeted attack patch p_nt is randomly scaled and rotated and then placed at a random position of picture I_z.
As shown in Fig. 2, after the background picture I_nt is constructed, the targeted adversarial patch p_t is randomly deformed and placed at a random position of the background picture I_nt for training, so that the information implicit in the untargeted attack patch p_nt is exploited to generate the targeted attack patch p_t.
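The targeted stage of Equation (4) can be sketched in the same style; the cross-entropy formulation of the log-probability objective and the target_class argument are illustrative assumptions, and f is a frozen pre-trained classifier.

```python
import torch
import torch.nn.functional as F

def generate_targeted_patch(f, p_nt, target_class, side=64, iters=800, lr=8/255):
    """Sketch of Eq. (4): maximize log Pr(target | A(p, I_nt, l, t)), where the
    background I_nt carries the untargeted patch p_nt from Eq. (2)."""
    device = next(f.parameters()).device
    I_z = torch.zeros(3, 299, 299, device=device)
    p_t = torch.rand(3, side, side, device=device, requires_grad=True)
    y = torch.tensor([target_class], device=device)

    for _ in range(iters):
        I_nt = apply_patch(p_nt, I_z)        # background I_nt = A(p_nt, I_z, l, t)
        adv = apply_patch(p_t, I_nt)         # targeted patch pasted onto I_nt
        logits = f(adv.unsqueeze(0))
        loss = F.cross_entropy(logits, y)    # minimizing CE maximizes log Pr(y | adv)
        loss.backward()
        with torch.no_grad():
            p_t -= lr * p_t.grad.sign()
            p_t.clamp_(0, 1)
            p_t.grad.zero_()
    return p_t.detach()
```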
1. Untargeted adversarial patch attack experiment
Five pre-trained ImageNet image classification models are adopted as the attacked deep neural networks in the experiments: Inception-V3, ResNet-50, Xception, VGG-16, and VGG-19. Each of these models achieves over 75% recognition accuracy on the ImageNet dataset.
The experiments test patch attack performance from 3 angles: (1) white-box single-model attack, i.e., generating the adversarial patch p_nt against a single known model and attacking that single target model; (2) white-box ensemble attack, i.e., jointly training with the above 5 models to generate the adversarial patch p_nt and attacking the 5 target models; (3) black-box attack, which adopts a leave-one-out scheme: the adversarial patch p_nt is generated by jointly training on 4 of the models and is tested against the 5th.
The black-box setting is very close to real-world attacks, because in an actual attack the attacker usually cannot obtain the structure, parameters, training data, or other information of the attacked model. During training, optimization is performed by gradient descent, with hyper-parameters Ite_max = 800 and η = 8.
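One plausible way to realize the ensemble and leave-one-out settings is to average the per-model loss of Equation (2) inside a shared update step. The helper below is a sketch under that assumption; models_and_losses is an illustrative interface (one activation-loss callable per model, as in the earlier sketch), not anything specified in the patent.

```python
import torch

def ensemble_patch_step(models_and_losses, patch, background, lr=8/255):
    """One joint-training step: average the Eq. (2) loss over several frozen models.
    For the leave-one-out black-box test, pass 4 models and hold out the 5th."""
    adv = apply_patch(patch, background)
    loss = sum(loss_fn(adv) for loss_fn in models_and_losses) / len(models_and_losses)
    loss.backward()
    with torch.no_grad():
        patch += lr * patch.grad.sign()   # gradient ascent, as in the single-model case
        patch.clamp_(0, 1)
        patch.grad.zero_()
    return loss.item()
```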
To verify the effect of the adversarial patch attack, 1000 pictures are randomly selected from the ImageNet validation set for testing; all 1000 pictures are correctly classified by the 5 models.
For ease of testing, 13 scaling parameters were set: 1%, 3.25%, 5.5%, 7.75%, 10%, 15%, 20%, 25%, 30%, 35%, 40%, 45%, and 50%. The angular rotation parameter of the patch is limited to [-45°, 45°].
Fig. 3 shows the construction process of an adversarial example for testing with a scaling parameter of 10%. The adversarial patch, generated under the white-box single-model setting against the Inception-V3 model, is rotated by a random angle and then placed at a random position of a clean picture.
Fig. 4 shows the attack success rate of the untargeted adversarial patch p_nt. Each point is computed from 5000 tests (1000 test pictures × 5 pre-trained models). When p_nt occupies 10% of the area of the input picture, the attack success rate of all 3 attack modes exceeds 70%.
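The reported success rates could be computed along the following lines; dataset loading is elided, and the success criterion (the prediction moves away from the true label) follows Equation (1).

```python
import torch

@torch.no_grad()
def attack_success_rate(models, images, labels, patch, scale):
    """Fraction of (image, model) pairs misclassified after patching.
    With 1000 images and 5 models this gives the 5000 tests per point."""
    successes, total = 0, 0
    for f in models:
        f.eval()
        for x, y in zip(images, labels):
            adv = apply_patch(patch, x, scale=scale)
            pred = f(adv.unsqueeze(0)).argmax(dim=1).item()
            successes += int(pred != y.item())   # Eq. (1): f(A(p, x, l, t)) != f(x)
            total += 1
    return successes / total
```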
Fig. 5 shows some untargeted adversarial patches p_nt, where (a) is used for the white-box single-model attack, (b) for the white-box ensemble attack, and (c) for the black-box attack; the target model is VGG-16 in all cases. As can be seen from the figure, the untargeted adversarial patches p_nt all contain a large number of circular patterns and exhibit a certain symmetry. With a scaling parameter of 10%, 97.1% of the adversarial examples constructed with (a) were identified as bubbles, 60.9% of those constructed with (b) were identified as ladybugs, and 55.8% of those constructed with (c) were identified as pinwheels. This indicates that the invention tends to generate untargeted adversarial patches containing circular patterns to enhance the patch's robustness to deformation, which also makes the corresponding adversarial examples more easily recognized as circle-like objects.
2. Targeted adversarial patch attack experiment
The models, test image dataset, and hyper-parameter settings used in this experiment are the same as in the untargeted adversarial patch attack experiment. Fig. 6 shows the attack success rate of the targeted adversarial patch p_t; the target class is toaster. DiAP is the abbreviation of the invention; GoogleAP is the existing comparison technique; DiAP_Iz denotes generating the targeted adversarial patch with picture I_z as the background; DiAP_In denotes generating it with Gaussian noise as the background; and toaster denotes testing with a photo of a real toaster. Fig. 6a shows that in the white-box single-model attack, GoogleAP has a slightly higher attack success rate than DiAP, but when the patch covers 10% of the background picture area, DiAP's attack success rate still approaches 80%. Fig. 6b shows that in the white-box ensemble attack, the success rates of DiAP and GoogleAP are very close. Fig. 6c shows that under black-box attack conditions, DiAP performs slightly better than GoogleAP. The method needs no training sample data at all and generates the adversarial patch driven only by the deep neural network model, which GoogleAP cannot do. Moreover, DiAP's attack success rate is far superior to that of DiAP_Iz, DiAP_In, and the real toaster photo. This suggests that the attack patch generated by the method learns high-level feature information of the target class, so that in the "eyes" of the deep neural network the patch is more toaster-like than a real picture of a toaster.
Fig. 7 shows some of the targeted adversarial patches p_t generated by the invention, where (a) is used for the white-box single-model attack, (b) for the white-box ensemble attack, and (c) for the black-box attack; the target model is VGG-16 in all cases. Although the invention is model-driven and uses no training sample data, it can be seen from the figure that these patches bear some similarity to the true target class (toaster).
Fig. 8 shows the effect of a physical attack using the patch shown in Fig. 7(b). First, the patch is printed with a Canon iP8780 printer; the printed patch is then placed near the attacked object; a Xiaomi Mi 8 phone is used to photograph the scene; and finally a pre-trained Inception-V3 model classifies the photographed images. As can be seen, objects that would otherwise be correctly classified are misidentified as toasters after the adversarial patch is placed.

Claims (5)

1. A method for generating an adversarial patch without sample data, characterized by comprising the following steps:
Step 1: generation of the untargeted adversarial patch p_nt without sample data: the generated attack patch is placed near the attacked sample, misleading the classification model into identifying the adversarial example as another object; in the absence of training data, a picture I_z whose RGB pixel values are all 0 is used as the background, and the untargeted adversarial patch p_nt is generated by fooling the feature information learned by each layer of the deep neural network.
Step 2: targeted countermeasure patch p without sample data t The generation method comprises the following steps: placing the generated attack patch near the attacked sample, and identifying the confrontation sample as a specified category by the misleading classification model; constructing a background picture, randomly deforming the confrontation patch, placing the confrontation patch at a random position of the background picture for training, and attacking the patch p by using the target-free nt Implicit information generation targeted attack patch p t
2. The method of claim 1, wherein the misled classification model in step 1 identifies the adversarial example as another object, formally expressed as:
$$f(A(p_{nt}, x, l, t)) \neq f(x), \quad \text{for } x \sim \mu \tag{1}$$
where x denotes an input image; μ denotes the distribution of input images; f denotes a pre-trained deep neural network classification model that outputs an estimated label f(x) for each input image x; and A(p_nt, x, l, t) denotes that the untargeted adversarial patch p_nt, after the deformation t is applied, is placed at position l of the input image x.
3. The method of claim 2, wherein, in the absence of training data, a picture I_z whose RGB pixel values are all 0 is used as the background, and the feature information learned by each layer of the deep neural network is fooled to generate the untargeted adversarial patch p_nt, formally expressed as:
$$p_{nt} = \arg\max_{p}\; \mathbb{E}_{l \sim L,\, t \sim T}\left[\sum_{i=1}^{K} \left\|\Gamma_i\big(A(p, I_z, l, t)\big)\right\|\right] \tag{2}$$
where L denotes the position distribution of the adversarial patch in the background image, and T denotes the deformation distribution of the patch; Γ_i(A(p_nt, I_z, l, t)) is the output at the i-th layer of the deep neural network model f for the input image A(p_nt, I_z, l, t); and K is the number of layers in model f;
Equation (2) generates the untargeted adversarial patch p_nt by maximizing the outputs of each layer of the pre-trained model f without using training data.
4. The method of claim 3, wherein the misled classification model in step 2 identifies the adversarial example as a specified class, formally expressed as:
$$f(A(p_t, x, l, t)) = \hat{y}, \quad \text{for } x \sim \mu \tag{3}$$

where ŷ denotes the specified attack target class; x denotes an input image; μ denotes the distribution of input images; f denotes a pre-trained deep neural network classification model that outputs an estimated label f(x) for each image x; and A(p_t, x, l, t) denotes that the targeted adversarial patch p_t, after the deformation t is applied, is placed at position l of image x.
5. The method of claim 4, wherein the targeted adversarial patch p_t is generated as formally expressed by:
$$p_t = \arg\max_{p}\; \mathbb{E}_{l \sim L,\, t \sim T}\left[\log \Pr\big(\hat{y} \mid A(p, I_{nt}, l, t)\big)\right] \tag{4}$$

where I_nt = A(p_nt, I_z, l, t), i.e., A(p_nt, I_z, l, t) serves as the background picture I_nt for training the targeted adversarial patch p_t; the untargeted attack patch p_nt is randomly scaled and rotated and then placed at a random position of picture I_z.
CN202110708530.1A 2021-06-24 2021-06-24 Method for generating an adversarial patch without sample data Active CN113469329B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110708530.1A CN113469329B (en) 2021-06-24 2021-06-24 Method for generating an adversarial patch without sample data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110708530.1A CN113469329B (en) 2021-06-24 2021-06-24 Method for generating an adversarial patch without sample data

Publications (2)

Publication Number Publication Date
CN113469329A CN113469329A (en) 2021-10-01
CN113469329B (en) 2023-03-24

Family

ID=77872943

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110708530.1A Active CN113469329B (en) 2021-06-24 2021-06-24 Method for generating an adversarial patch without sample data

Country Status (1)

Country Link
CN (1) CN113469329B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114266344A (en) * 2022-01-06 2022-04-01 北京墨云科技有限公司 Method and apparatus for attacking a neural network visual recognition system using adversarial patches

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111447212A (en) * 2020-03-24 2020-07-24 哈尔滨工程大学 Method for generating and detecting APT (advanced persistent threat) attack sequences based on GAN (generative adversarial network)
CN111461307A (en) * 2020-04-02 2020-07-28 武汉大学 Universal perturbation generation method based on a generative adversarial network

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11768932B2 (en) * 2019-06-28 2023-09-26 Baidu Usa Llc Systems and methods for fast training of more robust models against adversarial attacks
CN111275115B (en) * 2020-01-20 2022-02-22 星汉智能科技股份有限公司 Method for generating adversarial examples based on a generative adversarial network
US10783401B1 (en) * 2020-02-23 2020-09-22 Fudan University Black-box adversarial attacks on videos
CN112149609A (en) * 2020-10-09 2020-12-29 中国人民解放军空军工程大学 Black-box adversarial example attack method for a power quality signal neural network classification model

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111447212A (en) * 2020-03-24 2020-07-24 哈尔滨工程大学 Method for generating and detecting APT (advanced persistent threat) attack sequences based on GAN (generative adversarial network)
CN111461307A (en) * 2020-04-02 2020-07-28 武汉大学 Universal perturbation generation method based on a generative adversarial network

Also Published As

Publication number Publication date
CN113469329A (en) 2021-10-01

Similar Documents

Publication Publication Date Title
Athalye et al. Synthesizing robust adversarial examples
Huang et al. Universal physical camouflage attacks on object detectors
Zhong et al. Backdoor embedding in convolutional neural network models via invisible perturbation
Brown et al. Adversarial patch
Chen et al. Amplitude-phase recombination: Rethinking robustness of convolutional neural networks in frequency domain
Quiring et al. Backdooring and poisoning neural networks with image-scaling attacks
CN108491837B (en) Anti-attack method for improving license plate attack robustness
Zhang et al. Adversarial examples for replay attacks against CNN-based face recognition with anti-spoofing capability
Pautov et al. On adversarial patches: real-world attack on arcface-100 face recognition system
Xiong et al. Multi-source adversarial sample attack on autonomous vehicles
CN111626925A Method and device for generating an adversarial patch
Xue et al. Robust backdoor attacks against deep neural networks in real physical world
Sharma et al. Adversarial patch attacks and defences in vision-based tasks: A survey
CN113469329B (en) Method for generating confrontation patch without sample data
Yang et al. Beyond digital domain: Fooling deep learning based recognition system in physical world
Lian et al. CBA: Contextual background attack against optical aerial detection in the physical world
Zhou et al. A data independent approach to generate adversarial patches
Gittings et al. Vax-a-net: Training-time defence against adversarial patch attacks
Lapid et al. Patch of invisibility: Naturalistic black-box adversarial attacks on object detectors
CN114067176A Adversarial patch generation method without sample data
CN113435264A (en) Face recognition attack resisting method and device based on black box substitution model searching
Xiang et al. Revealing perceptible backdoors, without the training set, via the maximum achievable misclassification fraction statistic
Ye et al. Patch-based attack on traffic sign recognition
Liang et al. Poisoned forgery face: Towards backdoor attacks on face forgery detection
Khuspe et al. Robust image forgery localization and recognition in copy-move using bag of features and SVM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant