CN114399630A - Adversarial example generation method based on belief attack and salient region perturbation limitation - Google Patents

Adversarial example generation method based on belief attack and salient region perturbation limitation

Info

Publication number
CN114399630A
CN114399630A (application CN202111655287.8A)
Authority
CN
China
Prior art keywords
disturbance
attack
belief
generating
countermeasure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111655287.8A
Other languages
Chinese (zh)
Inventor
张世辉
左东旭
杨永亮
张晓微
王磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yanshan University
Original Assignee
Yanshan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yanshan University
Priority to CN202111655287.8A
Publication of CN114399630A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks

Abstract

The invention provides a method for generating adversarial examples based on belief attack and salient region perturbation limitation, relating to the technical fields of deep neural networks and computer vision. The method comprises the following steps: providing an original image and a white-box target model, and using a data set containing the original image as the data set; generating a binary mask of the salient region of the original image by means of a class activation mapping technique, and generating a global adversarial perturbation by a belief-based attack method fused with an iterative fast gradient method; fusing the generated global adversarial perturbation with the salient-region binary mask to generate the salient-region adversarial perturbation; adding the salient-region adversarial perturbation to the input image and iterating the update until a preset termination condition is reached, then outputting the image adversarial example of the last iteration as the generated adversarial example. The adversarial examples generated by the method exhibit low perturbation and high transferability.

Description

Adversarial example generation method based on belief attack and salient region perturbation limitation
Technical Field
The invention relates to the technical field of deep neural networks and the field of computer vision, and in particular to a method for generating adversarial examples based on belief attack and salient region perturbation limitation.
Background
The wide application of deep neural networks has brought tremendous performance gains to many computer vision tasks, such as image classification, object detection, and image segmentation. However, recent studies have found that deep neural networks (DNNs) are susceptible to being fooled by carefully crafted adversarial examples. In 2014, Szegedy et al. first discovered the existence of adversarial examples in the field of image classification: adding a small perturbation to an original image can cause the DNN model to misclassify it. More seriously, these small perturbations are subtle and not easily perceived by the human visual system. This finding reveals that DNN models present serious security issues which, if left unsolved, pose uncontrollable security risks. Therefore, generating highly aggressive adversarial examples can not only expose the vulnerabilities of a DNN model but also help improve its robustness.
Existing adversarial example generation methods can be divided into two categories: white-box attack methods and black-box attack methods. A white-box attack method assumes that all information about the target network is available and uses that information to generate adversarial examples. In contrast, a black-box attack method can generate offensive adversarial examples without access to any information about the target network. Since model information is difficult to obtain for most real-world systems, black-box attacks are more widely applicable. Existing black-box attack methods can be further divided into two categories: query-based attack methods and transfer-based attack methods. Query-based methods query the target model with input images and then use the returned information to generate adversarial examples that attack the black-box model. According to the query information used, such methods can be further classified into prediction-score-based and prediction-label-based query methods. For the prediction-score-based approach, Chen et al., in the article "ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models [C] // Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, Dallas, TX, United States, pages 15-26, 2017", propose a black-box adversarial example generation method (ZOO) that uses prediction-score-based gradient estimation in place of the gradient information of the black-box target model to generate adversarial examples. For prediction-label-based query methods, Brendel et al., in the article "Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models [C] // International Conference on Learning Representations, Vancouver, BC, Canada, 2018", propose a boundary attack method that randomly samples candidate perturbations from a particular distribution and then selects the perturbation that minimizes the loss function as the adversarial perturbation. Although query-based attack methods can achieve good black-box attack performance, the large number of required queries inevitably introduces high computational overhead. Transfer-based methods, which aim to improve the transferability of adversarial examples generated on a white-box model so that they also attack the target model, can likewise be divided into two categories: surrogate-model-based attack methods and gradient-based attack methods. The surrogate-model-based attack method trains a white-box surrogate model to generate adversarial examples; Zhou et al. propose such a data-free substitute training method in the article "DaST: Data-Free Substitute Training for Adversarial Attacks [C] // IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, United States, pages 234-". Surrogate-model-based approaches, however, also require a large number of query operations. Gradient-based attack methods instead aim to improve the transferability of the adversarial examples generated on a white-box model; Kurakin et al. propose the I-FGSM method in the article "Adversarial Examples in the Physical World [C] // International Conference on Learning Representations, Caribe Hilton, San Juan, Puerto Rico, pages 99-112, 2016", which generates adversarial examples through multiple iterations.
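The I-FGSM iteration just mentioned can be sketched as follows. This is a minimal NumPy illustration on a toy logistic-regression "model"; the model, loss, and parameter values are illustrative and not from the patent.

```python
import numpy as np

def ifgsm_attack(x, y, w, b, eps=0.1, steps=10):
    """Minimal I-FGSM sketch on a logistic-regression 'model'.

    x: input vector, y: label in {0, 1}, (w, b): model weights.
    The perturbation is accumulated with step size eps/steps and kept
    inside the L-infinity ball of radius eps around the original x.
    """
    alpha = eps / steps
    x_adv = x.copy()
    for _ in range(steps):
        # forward pass: p = sigmoid(w.x + b); the cross-entropy
        # gradient with respect to x is (p - y) * w
        p = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))
        grad = (p - y) * w
        # ascend the loss along the gradient sign, then project
        # back into the eps-ball around x
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

The loop is the untargeted variant: each step increases the classification loss for the true label, so the prediction is pushed away from y.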
Although gradient-based attack methods do not require a large number of query operations, the following problems remain. First, gradient-based methods typically ignore the semantic information in the input image that plays an important role in DNN model decision-making, so the adversarial examples they generate are saturated with perceptible perturbations, which results in poor visual quality. Second, the adversarial examples generated by gradient-based methods are less aggressive against black-box target models, a weakness that is especially pronounced against some defended models.
Similar to the process of training a neural network model, the process of generating adversarial examples can also be regarded as an optimization problem. Specifically, generating adversarial examples under a white-box model can be regarded as training a neural network model, while using the adversarial examples to attack a black-box model can be regarded as validating that model. Therefore, an optimization method with good generalization can effectively improve the transferability of adversarial examples. Zhuang et al., in the article "AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients [C] // Neural Information Processing Systems, 2020, 33: 18795-18806", propose a belief-based adaptive optimization method that integrates the advantages of stochastic gradient descent (SGD) and the Adam algorithm, combining the good generalization of SGD with the fast, stable convergence of Adam. Applying the belief idea to adversarial example generation can therefore effectively improve the transferability of the adversarial examples.
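The AdaBelief rule referred to above can be sketched as a single update step. This is a NumPy illustration; the hyperparameter values are the optimizer's usual defaults, not values from the patent.

```python
import numpy as np

def adabelief_step(theta, grad, state, lr=0.01,
                   beta1=0.9, beta2=0.999, delta=1e-8):
    """One AdaBelief update step.

    The step is scaled by the 'belief' term (grad - m)^2, i.e. how far
    the observed gradient deviates from its exponential moving average,
    rather than by the raw squared gradient as in Adam.
    state is the tuple (m, s, t) of the two EMAs and the step counter.
    """
    m, s, t = state
    t += 1
    m = beta1 * m + (1 - beta1) * grad
    s = beta2 * s + (1 - beta2) * (grad - m) ** 2
    # bias-corrected estimates, as in Adam/AdaBelief
    m_hat = m / (1 - beta1 ** t)
    s_hat = s / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(s_hat) + delta)
    return theta, (m, s, t)
```

On a smooth objective the belief term shrinks when gradients agree with their running mean, so the optimizer takes larger, SGD-like steps; when gradients disagree it takes cautious, Adam-like steps.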
Based on the above analysis, existing adversarial example generation methods suffer from perceptible perturbations, low transferability, and related problems. Aiming at these problems, the invention provides an adversarial example generation method based on belief attack and salient region perturbation limitation that generates low-perturbation, high-transferability adversarial examples; such examples can expose the vulnerabilities of a DNN model and serve as an evaluation index of DNN model security, thereby improving the robustness and security of DNN models.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an adversarial example generation method based on belief attack and salient region perturbation limitation that effectively solves the perceptible-perturbation and low-transferability problems of existing adversarial example generation methods, facilitates the detection of vulnerabilities in DNN models, and can serve as an evaluation index for DNN model security, thereby improving the robustness and security of DNN models; the generated adversarial examples exhibit low perturbation and high transferability.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
a method for generating a countercheck sample based on belief attack and significant area disturbance limitation comprises the following steps:
step S1: providing an original image, and taking the original image as training data of a DNN model;
step S2: providing a white-box target model, using a data set containing an original image as a training data set, generating a binary mask of a salient region of the original image by using a class activation mapping technology, and generating global countermeasure disturbance by using a belief-based attack method and a fusion I-FGSM countermeasure sample generation method;
step S3: carrying out Hadamard product operation on the generated global countermeasure disturbance and the significant region binary mask to generate the significant region countermeasure disturbance;
step S4: the salient region is added to the input image against the disturbance,
step S5: and repeating the steps S2-S4 to generate image antagonistic samples in an iterative mode, and clipping overflowing pixel values until a preset termination condition is reached, wherein the antagonistic samples generated in the last iteration are used as output antagonistic samples.
The technical scheme of the invention is further improved as follows: in step S1, the original image comes from the ImageNet validation dataset and comprises 1000 pictures of different classes.
The technical scheme of the invention is further improved as follows: in step S2, the white-box model adopts one of Inception-v3, Inception-v4, Inception-ResNet-v2, and ResNet-v2-101.
The technical scheme of the invention is further improved as follows: in step S2, the class activation mapping technique is based on the Grad-CAM method: a class activation map is obtained by computing the neuron importance weights of the white-box target model with respect to the input image and fusing them with the activation feature maps by weighted summation, after which the corresponding salient-region binary mask is generated.
The process of obtaining the salient-region binary mask is expressed as:

\alpha_k^c = \frac{1}{Z} \sum_i \sum_j \frac{\partial y^c}{\partial A_{ij}^k}

M = S_f(r, x, y) = H\left( \sum_k \alpha_k^c A^k,\ r \right)

where \alpha_k^c is the neuron importance weight of the k-th feature map with respect to class c, y^c is the score of class c before the softmax layer (with respect to which the gradient is taken), A_{ij}^k is the pixel value of the k-th feature map at position (i, j), Z is the number of pixels of the input image, H(·) is the salient-region selection function, and S_f(r, x, y) is the binary mask obtained by selecting, for the input image x with label y, the proportion-r most salient pixel region on classifier f; the ratio r lies in the range [0.1, 1] and determines the fraction of most-salient pixels selected.
The technical scheme of the invention is further improved as follows: in step S2, the belief-based attack method comprises: computing the observed gradient and its exponential moving average (EMA), the observed gradient being computed as:

g_t = \frac{\nabla_x J(x_t^{adv}, y)}{\left\| \nabla_x J(x_t^{adv}, y) \right\|_1}

where J is the loss function of the classifier and \|\cdot\|_1 is the L_1 norm;
the EMAs of the observed gradient and of its squared deviation being computed as:

m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t
s_t = \beta_2 s_{t-1} + (1 - \beta_2)(g_t - m_t)^2

where m_t is the EMA of the observed gradient g_t at the t-th iteration, s_t is the EMA of the squared gradient deviation, and \beta_1 = 0.99 and \beta_2 = 0.999 are smoothing parameters;
based on these two EMAs, the adversarial perturbation being computed as:

d_t = \frac{m_t}{\sqrt{s_t} + \delta}

where \delta = 1e-8;
and bias correction, regularization, and scaling operations being applied to the adversarial perturbation to obtain the global adversarial perturbation r_t (the three formulas for this step appear only as images in the original document), where \alpha is the initial step size, \epsilon is the perturbation budget, N is the number of pixels of the input image, T is the total number of iterations, \lambda_t is the bias-correction factor, r_t is the global adversarial perturbation at the t-th iteration, and \eta is the perturbation control factor; the perturbation budget \epsilon = 16, so the generated adversarial perturbation lies in the range [-16, 16].
The technical scheme of the invention is further improved as follows: in step S3, the salient-region adversarial perturbation is computed as:

r_t^S = r_t \odot M

where r_t is the global adversarial perturbation, M is the salient-region binary mask, and \odot denotes the Hadamard product, i.e., element-wise matrix multiplication.
The technical scheme of the invention is further improved as follows: in step S5, adding the perturbation to the input image and clipping overflowing pixel values are computed as:

x_{t+1}^{adv} = \mathrm{Clip}_{x, \epsilon}\left( x_t^{adv} + r_t^S \right)

x_{t+1}^{adv} = \mathrm{Clip}_{[0, 255]}\left( x_{t+1}^{adv} \right)

where \mathrm{Clip}_{x, \epsilon}(\cdot) limits the perturbation to within the \epsilon-range around the original image x, and \mathrm{Clip}_{[0, 255]}(\cdot) clips overflowing pixel values to the valid range.
The preset termination condition comprises: the preset iteration count T = 10.
Due to the adoption of the above technical scheme, the invention achieves the following technical progress:
The method effectively solves the perceptible-perturbation and low-transferability problems of existing adversarial example generation methods, facilitates the detection of vulnerabilities in DNN models, and serves as an evaluation index for DNN model security, thereby improving the robustness and security of DNN models; the generated adversarial examples exhibit low perturbation and high transferability.
(1) Exploiting the observation that different DNN models attend to salient regions of similar extent and shape, the invention limits the global adversarial perturbation by salient-region perturbation limitation. With almost no effect on adversarial transferability, this greatly reduces the perturbation, yielding adversarial perturbations that are hard for the human eye to perceive and greatly improving visual quality.
(2) Exploiting the good convergence and generalization of the AdaBelief optimizer in DNN training, the invention proposes a belief-based adversarial example generation method that allows the adversarial optimization process to escape local maxima easily, greatly improving the transferability of the adversarial examples.
(3) The method can be combined naturally and efficiently with existing gradient-based adversarial example generation methods, further improving the transferability of the adversarial examples.
Drawings
Fig. 1 is a schematic diagram of an embodiment of the adversarial example generation method based on belief attack and salient region perturbation limitation provided by the present invention.
Detailed Description
The present invention will be described in further detail with reference to the following examples:
as shown in fig. 1, the main content of the present invention is to provide a method for generating a countermeasure sample based on belief attack and significant area perturbation limitation, which can be used to detect a vulnerability of a DNN model as an evaluation index of the security of the DNN model, thereby improving the robustness and security of the DNN model.
In order to make the technical method of the present invention clear, the present invention is further described in detail below with reference to the accompanying drawings and examples.
Step S1: providing an original image, which serves as training data for the DNN model.
In this embodiment, the original images come from the ImageNet validation dataset, from which 1000 pictures of different classes were selected, almost all of which are correctly classified by the tested models.
Step S2: providing a white-box target model, using a data set containing the original image as the training data set, generating a binary mask of the salient region of the original image by means of a class activation mapping technique, and generating a global adversarial perturbation by a belief-based attack method fused with the I-FGSM adversarial example generation method.
In this embodiment, Inception-v3 is adopted as the white-box target model; other models such as Inception-v4, Inception-ResNet-v2, and ResNet-v2-101 can also be used as the white-box model. As shown in fig. 1, the class activation mapping technique is based on the Grad-CAM method: a class activation map is obtained by computing the neuron importance weights of the white-box target model with respect to the input image and fusing them with the activation feature maps by weighted summation, after which the corresponding salient-region binary mask is generated.
The process of obtaining the salient-region binary mask can be specifically expressed as:

\alpha_k^c = \frac{1}{Z} \sum_i \sum_j \frac{\partial y^c}{\partial A_{ij}^k}

M = S_f(r, x, y) = H\left( \sum_k \alpha_k^c A^k,\ r \right)

where \alpha_k^c is the neuron importance weight of the k-th feature map with respect to class c, y^c is the score of class c before the softmax layer (with respect to which the gradient is taken), A_{ij}^k is the pixel value of the k-th feature map at position (i, j), Z is the number of pixels of the input image, H(·) is the salient-region selection function, and S_f(r, x, y) is the binary mask obtained by selecting, for the input image x with label y, the proportion-r most salient pixel region on classifier f; the ratio r lies in the range [0.1, 1] and determines the fraction of most-salient pixels selected. In the present embodiment, r = 0.8.
In this embodiment, the belief-based attack method proceeds as follows: the observed gradient is computed and its exponential moving average (EMA) is maintained. The observed gradient is computed as:

g_t = \frac{\nabla_x J(x_t^{adv}, y)}{\left\| \nabla_x J(x_t^{adv}, y) \right\|_1}

where J is the loss function of the classifier and \|\cdot\|_1 is the L_1 norm.
The EMAs of the observed gradient and of its squared deviation are then computed as:

m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t
s_t = \beta_2 s_{t-1} + (1 - \beta_2)(g_t - m_t)^2

where m_t is the EMA of the observed gradient g_t at the t-th iteration, s_t is the EMA of the squared gradient deviation, and \beta_1 = 0.99 and \beta_2 = 0.999 are smoothing parameters.
Based on these two EMAs, the adversarial perturbation is computed as:

d_t = \frac{m_t}{\sqrt{s_t} + \delta}

where \delta is a small value used for numerical stability; in the present embodiment, \delta = 1e-8.
Bias correction, regularization, and scaling operations are then applied to the adversarial perturbation to obtain the global adversarial perturbation r_t (the three formulas for this step appear only as images in the original document), where \alpha is the initial step size, \epsilon is the perturbation budget, N is the number of pixels of the input image, T is the total number of iterations, \lambda_t is the bias-correction factor, r_t is the global adversarial perturbation at the t-th iteration, and \eta is the perturbation control factor. In the present embodiment, the perturbation budget \epsilon = 16, meaning the generated adversarial perturbation lies in the range [-16, 16].
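The belief-based update above can be sketched as follows. The L1 normalization and the two EMAs follow the formulas in the text; the Adam-style bias correction is an assumption of this sketch, since the exact correction and scaling formulas appear only as images in the original.

```python
import numpy as np

def belief_direction(grad, m, s, t, beta1=0.99, beta2=0.999, delta=1e-8):
    """One belief-attack step: perturbation direction plus updated EMAs.

    grad is first L1-normalized (the 'observed gradient' g_t), then the
    AdaBelief-style EMAs are updated and the direction is taken as
    m_t / (sqrt(s_t) + delta), with Adam-style bias correction.
    """
    g = grad / (np.sum(np.abs(grad)) + delta)   # observed gradient g_t
    m = beta1 * m + (1 - beta1) * g             # EMA of the gradient
    s = beta2 * s + (1 - beta2) * (g - m) ** 2  # EMA of the 'belief' term
    m_hat = m / (1 - beta1 ** t)                # bias correction (assumed)
    s_hat = s / (1 - beta2 ** t)
    direction = m_hat / (np.sqrt(s_hat) + delta)
    return direction, m, s
```

The caller keeps `m` and `s` across iterations and increments `t` starting from 1, mirroring how m_t and s_t accumulate over the attack's T iterations.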
Step S3: performing a Hadamard product operation on the generated global adversarial perturbation and the salient-region binary mask to generate the salient-region adversarial perturbation.
In this embodiment, the salient-region adversarial perturbation is computed as:

r_t^S = r_t \odot M

where r_t is the global adversarial perturbation, M is the salient-region binary mask, and \odot denotes the Hadamard product, i.e., element-wise matrix multiplication.
Step S4: adding the salient-region adversarial perturbation to the input image.
Step S5: repeating steps S2-S4 to generate image adversarial examples iteratively, clipping overflowing pixel values, until a preset termination condition is reached; the adversarial example generated in the last iteration is the output adversarial example.
In this embodiment, adding the perturbation to the input image and clipping overflowing pixel values are computed as:

x_{t+1}^{adv} = \mathrm{Clip}_{x, \epsilon}\left( x_t^{adv} + r_t^S \right)

x_{t+1}^{adv} = \mathrm{Clip}_{[0, 255]}\left( x_{t+1}^{adv} \right)

where \mathrm{Clip}_{x, \epsilon}(\cdot) limits the perturbation to within the \epsilon-range around the original image x, and \mathrm{Clip}_{[0, 255]}(\cdot) clips overflowing pixel values to the valid range.
In step S5, the preset termination condition comprises: the preset iteration count T = 10.
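Putting steps S2-S5 together, the attack loop can be sketched as follows. This is a minimal NumPy illustration: `grad_fn` and `mask` are hypothetical stand-ins for the white-box model's loss gradient and the Grad-CAM salient mask, and the sign step is a simplification of the scaling formulas that appear only as images in the original. The parameter values T = 10 and eps = 16 follow the embodiment.

```python
import numpy as np

def generate_adversarial(x, y, grad_fn, mask, eps=16.0, T=10,
                         beta1=0.99, beta2=0.999, delta=1e-8):
    """Sketch of the full attack loop (steps S2-S5).

    grad_fn(x_adv, y) must return the loss gradient w.r.t. the image;
    mask is the 0/1 salient-region mask from step S2. Both are
    assumptions standing in for the white-box model and Grad-CAM.
    """
    alpha = eps / T                    # per-iteration step size
    x_adv = x.astype(np.float64).copy()
    m = np.zeros_like(x_adv)
    s = np.zeros_like(x_adv)
    for t in range(1, T + 1):
        g = grad_fn(x_adv, y)
        g = g / (np.sum(np.abs(g)) + delta)            # observed gradient
        m = beta1 * m + (1 - beta1) * g                # EMA of the gradient
        s = beta2 * s + (1 - beta2) * (g - m) ** 2     # EMA of the belief term
        direction = np.sign(m / (np.sqrt(s) + delta))  # simplified ascent direction
        pert = alpha * direction * mask                # salient-region perturbation
        x_adv = np.clip(x_adv + pert, x - eps, x + eps)  # eps-ball clipping
        x_adv = np.clip(x_adv, 0.0, 255.0)               # pixel-range clipping
    return x_adv
```

With a real model, `grad_fn` would backpropagate the classification loss through the network; here any callable with that signature works, which is convenient for testing the loop in isolation.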

Claims (7)

1. An adversarial example generation method based on belief attack and salient region perturbation limitation, characterized by comprising the following steps:
Step S1: providing an original image, and taking the original image as training data for a DNN model;
Step S2: providing a white-box target model, using a data set containing the original image as the training data set, generating a binary mask of the salient region of the original image by means of a class activation mapping technique, and generating a global adversarial perturbation by a belief-based attack method fused with the I-FGSM adversarial example generation method;
Step S3: performing a Hadamard product operation on the generated global adversarial perturbation and the salient-region binary mask to generate the salient-region adversarial perturbation;
Step S4: adding the salient-region adversarial perturbation to the input image;
Step S5: repeating steps S2-S4 to generate image adversarial examples iteratively, clipping overflowing pixel values, until a preset termination condition is reached; the adversarial example generated in the last iteration is the output adversarial example.
2. The adversarial example generation method based on belief attack and salient region perturbation limitation as claimed in claim 1, characterized in that: in step S1, the original image comes from the ImageNet validation dataset and comprises 1000 pictures of different classes.
3. The adversarial example generation method based on belief attack and salient region perturbation limitation as claimed in claim 1, characterized in that: in step S2, the white-box model adopts one of Inception-v3, Inception-v4, Inception-ResNet-v2, and ResNet-v2-101.
4. The adversarial example generation method based on belief attack and salient region perturbation limitation as claimed in claim 1, characterized in that: in step S2, the class activation mapping technique is based on the Grad-CAM method: a class activation map is obtained by computing the neuron importance weights of the white-box target model with respect to the input image and fusing them with the activation feature maps by weighted summation, after which the corresponding salient-region binary mask is generated;
the process of obtaining the salient-region binary mask being expressed as:

\alpha_k^c = \frac{1}{Z} \sum_i \sum_j \frac{\partial y^c}{\partial A_{ij}^k}

M = S_f(r, x, y) = H\left( \sum_k \alpha_k^c A^k,\ r \right)

where \alpha_k^c is the neuron importance weight of the k-th feature map with respect to class c, y^c is the score of class c before the softmax layer (with respect to which the gradient is taken), A_{ij}^k is the pixel value of the k-th feature map at position (i, j), Z is the number of pixels of the input image, H(·) is the salient-region selection function, and S_f(r, x, y) is the binary mask obtained by selecting, for the input image x with label y, the proportion-r most salient pixel region on classifier f; the ratio r lies in the range [0.1, 1] and determines the fraction of most-salient pixels selected.
5. The adversarial example generation method based on belief attack and salient region perturbation limitation as claimed in claim 1, characterized in that: in step S2, the belief-based attack method comprises: computing the observed gradient and its exponential moving average (EMA), the observed gradient being computed as:

g_t = \frac{\nabla_x J(x_t^{adv}, y)}{\left\| \nabla_x J(x_t^{adv}, y) \right\|_1}

where J is the loss function of the classifier and \|\cdot\|_1 is the L_1 norm;
the EMAs of the observed gradient and of its squared deviation being computed as:

m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t
s_t = \beta_2 s_{t-1} + (1 - \beta_2)(g_t - m_t)^2

where m_t is the EMA of the observed gradient g_t at the t-th iteration, s_t is the EMA of the squared gradient deviation, and \beta_1 = 0.99 and \beta_2 = 0.999 are smoothing parameters;
based on these two EMAs, the adversarial perturbation being computed as:

d_t = \frac{m_t}{\sqrt{s_t} + \delta}

where \delta = 1e-8;
and bias correction, regularization, and scaling operations being applied to the adversarial perturbation to obtain the global adversarial perturbation r_t (the three formulas for this step appear only as images in the original document), where \alpha is the initial step size, \epsilon is the perturbation budget, N is the number of pixels of the input image, T is the total number of iterations, \lambda_t is the bias-correction factor, r_t is the global adversarial perturbation at the t-th iteration, and \eta is the perturbation control factor; the perturbation budget \epsilon = 16, and the generated adversarial perturbation lies in the range [-16, 16].
6. The adversarial example generation method based on belief attack and salient region perturbation limitation as claimed in claim 1, characterized in that: in step S3, the salient-region adversarial perturbation is computed as:

r_t^S = r_t \odot M

where r_t is the global adversarial perturbation, M is the salient-region binary mask, and \odot denotes the Hadamard product, i.e., element-wise matrix multiplication.
7. The adversarial example generation method based on belief attack and salient region perturbation limitation as claimed in claim 1, characterized in that: in step S5, adding the perturbation to the input image and clipping overflowing pixel values are computed as:

x_{t+1}^{adv} = \mathrm{Clip}_{x, \epsilon}\left( x_t^{adv} + r_t^S \right)

x_{t+1}^{adv} = \mathrm{Clip}_{[0, 255]}\left( x_{t+1}^{adv} \right)

where \mathrm{Clip}_{x, \epsilon}(\cdot) limits the perturbation to within the \epsilon-range around the original image x, and \mathrm{Clip}_{[0, 255]}(\cdot) clips overflowing pixel values to the valid range;
the preset termination condition comprising: the preset iteration count T = 10.
CN202111655287.8A 2021-12-31 2021-12-31 Adversarial example generation method based on belief attack and salient region perturbation limitation Pending CN114399630A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111655287.8A CN114399630A (en) 2021-12-31 2021-12-31 Adversarial example generation method based on belief attack and salient region perturbation limitation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111655287.8A CN114399630A (en) 2021-12-31 2021-12-31 Adversarial example generation method based on belief attack and salient region perturbation limitation

Publications (1)

Publication Number Publication Date
CN114399630A (en) 2022-04-26

Family

ID=81228706

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111655287.8A Pending CN114399630A (en) 2021-12-31 2021-12-31 Adversarial example generation method based on belief attack and salient region perturbation limitation

Country Status (1)

Country Link
CN (1) CN114399630A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115100421A (en) * 2022-06-22 2022-09-23 西北工业大学 Confrontation sample generation method based on image frequency domain decomposition and reconstruction
CN115100421B (en) * 2022-06-22 2024-03-12 西北工业大学 Antagonistic sample generation method based on image frequency domain decomposition reconstruction
CN115439377A (en) * 2022-11-08 2022-12-06 电子科技大学 Method for enhancing resistance to image sample migration attack
CN115439377B (en) * 2022-11-08 2023-03-24 电子科技大学 Method for enhancing resistance to image sample migration attack

Similar Documents

Publication Publication Date Title
CN111627044B (en) Target tracking attack and defense method based on deep network
CN111709435B (en) Discrete wavelet transform-based countermeasure sample generation method
CN111080675B (en) Target tracking method based on space-time constraint correlation filtering
Chen et al. Local patch network with global attention for infrared small target detection
CN111325324A (en) Deep learning confrontation sample generation method based on second-order method
CN110853074B (en) Video target detection network system for enhancing targets by utilizing optical flow
CN114399630A (en) Adversarial example generation method based on belief attack and salient region perturbation limitation
CN113822328B (en) Image classification method for defending against sample attack, terminal device and storage medium
CN113674140B (en) Physical countermeasure sample generation method and system
CN111783551A (en) Confrontation sample defense method based on Bayes convolutional neural network
CN113627543B (en) Anti-attack detection method
CN113704758A (en) Black box attack counterattack sample generation method and system
CN116051924B (en) Divide-and-conquer defense method for image countermeasure sample
CN111325221A (en) Image feature extraction method based on image depth information
CN115510986A (en) Countermeasure sample generation method based on AdvGAN
CN114638356A (en) Static weight guided deep neural network back door detection method and system
CN113487506A (en) Countermeasure sample defense method, device and system based on attention denoising
CN111797732A (en) Video motion identification anti-attack method insensitive to sampling
CN111191717A (en) Black box confrontation sample generation algorithm based on hidden space clustering
Yin A comparative study on the method of extracting edge and contour information of multifunctional digital ship image
Zhang et al. An efficient general black-box adversarial attack approach based on multi-objective optimization for high dimensional images
CN111080727B (en) Color image reconstruction method and device and image classification method and device
CN113486736B (en) Black box anti-attack method based on active subspace and low-rank evolution strategy
CN116486235A (en) Method and system for generating countermeasure sample based on sparse matrix and penalty term
CN116958123A (en) Image defense model robustness detection method combined with fusion attack

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination