CN114330652A - Target detection attack method and device

Target detection attack method and device

Info

Publication number
CN114330652A
CN114330652A
Authority
CN
China
Prior art keywords
loss
picture
feature
generator
target detection
Prior art date
Legal status
Pending
Application number
CN202111580489.0A
Other languages
Chinese (zh)
Inventor
孙军梅
袁珑
李秀梅
Current Assignee
Hangzhou Normal University
Original Assignee
Hangzhou Normal University
Priority date
Filing date
Publication date
Application filed by Hangzhou Normal University filed Critical Hangzhou Normal University
Priority to CN202111580489.0A priority Critical patent/CN114330652A/en
Publication of CN114330652A publication Critical patent/CN114330652A/en
Pending legal-status Critical Current

Abstract

The invention provides a target detection attack method and device. By converting adversarial example generation from a traditional optimization mechanism into a generation mechanism, the time required to generate adversarial examples is greatly shortened; moreover, once the GAN is trained, the interior of the target model no longer needs to be accessed, so black-box attacks can be carried out effectively. The method effectively guides the training of the network by exploiting the classification and position-regression losses output by the target model, and at the same time introduces a feature-layer loss, which captures the characteristics of the picture in high-dimensional space to which the network is most sensitive, so that perturbing them further improves the attack success rate. In addition, the added Gaussian filtering module removes the high-frequency perturbation of the adversarial example and keeps the low-frequency perturbation, which improves the picture quality of the generated adversarial example and further enhances the attack success rate.

Description

Target detection attack method and device
Technical Field
The invention belongs to the field of deep learning counterattack, and particularly relates to a target detection attack method and device.
Background
With the development of software and hardware technologies, deep learning techniques represented by convolutional neural networks have been widely applied to many computer vision tasks, such as image classification, target detection, semantic segmentation and scene text recognition. Despite the great success of deep learning on these tasks, recent studies have shown that neural networks are vulnerable to adversarial examples. Szegedy et al. first found that adding small perturbations that are hard for humans to perceive to an original sample can make a neural network unable to classify the resulting picture correctly, and called such perturbed pictures "adversarial examples". Adversarial examples concern the security of deep learning and have attracted the attention of many scholars at home and abroad, and many adversarial attack methods for image classification have been proposed, such as FGSM, DeepFool, C&W and MI-FGSM. Nowadays adversarial examples are no longer limited to the image classification task; other vision tasks have also begun to be attacked, and target detection is one of them.
Target detection is one of the core tasks in the computer vision field and is closely related to many other tasks, such as target tracking and semantic segmentation, which are built on top of target detection technology. Meanwhile, target detection is widely used in safety-critical scenarios such as industrial control and aerospace, so studying the security of target detection is particularly important. In 2017, Lu et al. first added perturbations to stop-sign and face pictures to mislead the Faster R-CNN detector and successfully deceived it, after which a series of studies on attacking target detection followed. Xie et al. proposed the DAG method to attack Faster R-CNN, which assigns a wrong label to every proposal region extracted by the detector and then, through a gradient optimization strategy, makes the detector classify those regions incorrectly. Although many attacks on target detection have been proposed, most methods suffer from several problems: 1) generating adversarial examples requires a large amount of time; for example, DPatch requires tens or even hundreds of thousands of iterations to generate a valid patch, and the method proposed by Darren requires an inference time of hundreds of seconds to generate an adversarial example. 2) Most are white-box attacks, which require knowledge of the model parameters. Existing attack methods are usually based on a gradient optimization strategy: when generating an adversarial example, the parameters of the actual model must be obtained and the noise is optimized through back-propagation. In a real attack, however, the attacker often faces a black-box model of unknown type whose internal parameters are unknown, so developing a black-box attack method is particularly important.
Disclosure of Invention
An object of the present invention is to provide a target detection attack method, which addresses the deficiencies of the prior art.
The method comprises the following specific steps:
Step one: preprocessing a clean original picture x;
Step two: inputting the preprocessed picture into a generator G to obtain adversarial noise G(x); smoothing the adversarial noise G(x) by two-dimensional-kernel Gaussian filtering, and adding the smoothed adversarial noise x_adv to the original picture x before preprocessing to form the adversarial picture x̂_adv = x + x_adv;
Step three: inputting the adversarial picture x̂_adv into a discriminator D, and computing the GAN loss;
The GAN loss is given by equation (1):
L_GAN = E_x[log D(x)] + E_x[log(1 - D(x̂_adv))]   (1)
where D(x) denotes the discrimination result for the original picture x, D(x̂_adv) denotes the discrimination result for the adversarial picture x̂_adv, and E_x denotes the expectation over the distribution of x.
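Purely as an illustrative sketch (not part of the claimed method), the GAN loss of equation (1) could be computed in PyTorch roughly as follows; the tensor names and the assumption that the discriminator outputs a probability per picture are illustrative choices, not details disclosed by the invention.

```python
import torch

def gan_loss(discriminator, x, x_hat_adv):
    """Sketch of the GAN loss in equation (1).

    discriminator: maps a batch of pictures to probabilities in [0, 1].
    x:             batch of original pictures.
    x_hat_adv:     batch of adversarial pictures (x + smoothed noise).
    """
    eps = 1e-8  # numerical stability for the logarithms
    d_real = discriminator(x)          # expected to be close to 1 for real pictures
    d_fake = discriminator(x_hat_adv)  # expected to be close to 0 for adversarial pictures
    # E_x[log D(x)] + E_x[log(1 - D(x_hat_adv))]
    return torch.log(d_real + eps).mean() + torch.log(1.0 - d_fake + eps).mean()
```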
Step four: inputting the adversarial picture x̂_adv into the target network to be attacked, and computing the adversarial loss L_adv;
The adversarial loss comprises a confidence loss and a regression loss.
The confidence loss is used to make the regions of interest in the picture be classified as background; the confidence loss function is given by equation (2):
L_confidence = Σ_{i=1}^{M} ℓ_BCE(C_i, 0)   (2)
where M is the total number of proposal boxes of the target network to be attacked, ℓ_BCE is the binary cross-entropy loss, and C_i is the confidence score of the i-th proposal box.
The regression loss is used to perturb the detected objects so that the detection-box positions deviate from their true values. Specifically, a fixed position coefficient (Δx*, Δy*, Δw*, Δh*) is assigned to the proposal boxes detected by the target network to be attacked, and the distance between the position coefficients output by the target network and this artificially set target position coefficient is computed. The regression loss function is given by equation (3):
L_regression = Σ_{j=1}^{M} z_j · ||(Δx_j, Δy_j, Δw_j, Δh_j) - (Δx*, Δy*, Δw*, Δh*)||_2   (3)
where z_j ∈ {0,1} indicates whether an object exists in the j-th proposal box, (Δx*, Δy*, Δw*, Δh*) is the artificially set target position coefficient, (Δx_j, Δy_j, Δw_j, Δh_j) is the position coefficient of the j-th proposal box, (Δx_j, Δy_j) are the coordinates of the center point of the j-th proposal box, and Δw_j, Δh_j are its width and height.
Therefore, the adversarial loss is given by equation (4):
L_adv = L_confidence + μ·L_regression   (4)
where μ is a hyperparameter balancing the confidence loss and the positional regression loss.
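For illustration only, a minimal sketch of the adversarial loss of equations (2)-(4) might look as follows, assuming the target detector exposes per-proposal confidence scores and position coefficients; the reduction over proposals and the choice of target coefficient are assumptions made for the example, not values fixed by the invention.

```python
import torch
import torch.nn.functional as F

def adversarial_loss(conf_scores, pred_coeffs, has_object, target_coeff, mu=0.4):
    """Sketch of L_adv = L_confidence + mu * L_regression (equations (2)-(4)).

    conf_scores:  (M,) confidence scores C_i of the proposal boxes.
    pred_coeffs:  (M, 4) position coefficients (dx, dy, dw, dh) of the proposals.
    has_object:   (M,) z_j in {0, 1}, whether the j-th proposal contains an object.
    target_coeff: (4,) artificially set target position coefficient.
    """
    # L_confidence: push every proposal towards the "background" label 0
    background = torch.zeros_like(conf_scores)
    l_conf = F.binary_cross_entropy(conf_scores, background)

    # L_regression: distance between predicted coefficients and the fixed target
    dist = torch.norm(pred_coeffs - target_coeff, dim=1)   # (M,)
    l_reg = (has_object.float() * dist).sum()

    return l_conf + mu * l_reg
```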
Step five: computing the Euclidean distance between the generated adversarial picture and the original picture to obtain the perturbation loss L_perturb; simultaneously inputting the adversarial picture into the feature extraction network of the target network to be attacked, and computing the feature-layer loss L_feature;
Step six: computing the total loss from the losses obtained in steps three, four and five, then minimizing the total loss through back-propagation and updating the parameters of the generator and the discriminator to obtain the training weights;
L_all = α/L_feature + L_GAN + β·L_adv + γ·L_perturb   (5)
where α, β and γ are the weight coefficients of L_feature, L_adv and L_perturb, respectively.
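As a trivial illustrative sketch of equation (5), the total loss could be assembled as below; the default coefficient values are placeholders, not values disclosed by the invention.

```python
def total_loss(l_feature, l_gan, l_adv, l_perturb, alpha=1.0, beta=1.0, gamma=1.0):
    """Sketch of L_all = alpha / L_feature + L_GAN + beta * L_adv + gamma * L_perturb."""
    eps = 1e-8  # avoid division by zero when the feature distance is very small
    return alpha / (l_feature + eps) + l_gan + beta * l_adv + gamma * l_perturb
```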
Step seven: the generator loads the training weights obtained in step six and generates the corresponding adversarial picture from the input original picture.
Preferably, the perturbation loss L_perturb is used to limit the noise generated by the generator and is calculated according to equation (6):
L_perturb = E_x(||G(x)||_2)   (6)
where ||·||_2 is the L2 norm, which limits the magnitude of the generated noise.
Preferably, the feature-layer loss L_feature is obtained by feeding the generated adversarial picture and the original picture into a feature extractor, extracting the features of each layer, and computing the distance between the two; this loss is maximized during training so that the features of the generated adversarial picture move away from the features of the original picture, making the attack stronger. The feature-layer loss function is expressed as equation (7):
L_feature = Σ_{f=1}^{F} ||T(L(x̂_adv, f)) - T(L(x, f))||_2   (7)
where F is the total number of feature layers in the feature extractor, f denotes the f-th feature layer, and T(L(x, f)) denotes the strong normalization of the f-th feature map extracted from the original picture x.
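The sketch below illustrates how the perturbation loss of equation (6) and the feature-layer loss of equation (7) could be computed, assuming the feature extractor returns a list of per-layer feature maps; approximating the strong normalization T by an L2 normalization is an assumption made for the example.

```python
import torch

def perturbation_loss(noise):
    """Sketch of L_perturb = E_x(||G(x)||_2): mean L2 norm of the generated noise."""
    return noise.flatten(1).norm(p=2, dim=1).mean()

def feature_layer_loss(features_adv, features_clean):
    """Sketch of L_feature: summed distance between normalized per-layer features.

    features_adv / features_clean: lists of feature maps, one tensor per layer.
    """
    def strong_normalize(t):            # assumed form of the normalization T(.)
        flat = t.flatten(1)
        return flat / (flat.norm(p=2, dim=1, keepdim=True) + 1e-8)

    loss = 0.0
    for fa, fc in zip(features_adv, features_clean):
        loss = loss + (strong_normalize(fa) - strong_normalize(fc)).norm(p=2, dim=1).mean()
    return loss
```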
Preferably, the generator G comprises seven convolutional blocks, a convolutional layer and a Tanh activation function in sequence, wherein each convolutional block comprises a convolutional layer and a LeakyReLU activation function in sequence.
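The invention does not specify channel widths, kernel sizes or strides for the generator; the PyTorch sketch below fills these in with placeholder values purely so that the described structure (seven convolutional blocks, a convolutional layer and a Tanh activation) is concrete.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # each convolutional block: a convolutional layer followed by a LeakyReLU activation
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                         nn.LeakyReLU(0.2, inplace=True))

class Generator(nn.Module):
    """Seven convolutional blocks, a convolutional layer and a Tanh activation."""
    def __init__(self, channels=3, width=64):
        super().__init__()
        blocks = [conv_block(channels, width)]
        blocks += [conv_block(width, width) for _ in range(6)]   # seven blocks in total
        self.body = nn.Sequential(*blocks,
                                  nn.Conv2d(width, channels, kernel_size=3, padding=1),
                                  nn.Tanh())

    def forward(self, x):
        return self.body(x)   # adversarial noise G(x), bounded to (-1, 1) by Tanh
```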
Preferably, the discriminator comprises, in sequence, a convolutional layer, a LeakyReLU activation function, three convolutional blocks, a convolutional layer and a Sigmoid layer, wherein each convolutional block comprises, in turn, a convolutional layer, a BN layer and a LeakyReLU activation function.
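A matching sketch of such a discriminator is given below; again, channel widths, kernel sizes and strides are placeholder choices for illustration only.

```python
import torch.nn as nn

def disc_block(in_ch, out_ch):
    # each block: convolutional layer, BN layer, LeakyReLU activation
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
                         nn.BatchNorm2d(out_ch),
                         nn.LeakyReLU(0.2, inplace=True))

class Discriminator(nn.Module):
    """Conv + LeakyReLU, three convolutional blocks, a conv layer and a Sigmoid."""
    def __init__(self, channels=3, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            disc_block(width, width * 2),
            disc_block(width * 2, width * 4),
            disc_block(width * 4, width * 8),
            nn.Conv2d(width * 8, 1, kernel_size=4, stride=1, padding=0),
            nn.Sigmoid())

    def forward(self, x):
        return self.body(x).flatten(1).mean(dim=1)   # one realness score per picture
```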
Preferably, during training the generator and the discriminator are trained alternately: the discriminator is first trained once and its parameters are updated, and the generator is then trained multiple times so that its parameters are updated.
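A schematic training loop following this alternating scheme is sketched below. The data loader, the optimizers, the number of generator steps per discriminator step (two here), the unit loss weights and the helper functions `target_losses` and `smooth` are all assumptions made for illustration; in particular, updating the discriminator by maximizing the GAN objective is standard GAN practice and is assumed here rather than stated by the invention.

```python
import torch

def train_epoch(generator, discriminator, target_losses, smooth, loader,
                opt_g, opt_d, g_steps_per_d_step=2):
    """Sketch of the alternating GAN training described above.

    target_losses(x, x_hat) is assumed to return (l_adv, l_feature) from the
    target detector; smooth(noise) applies the Gaussian filtering of equation (9).
    """
    for x in loader:
        # 1) train the discriminator once (maximize the GAN objective, an assumption)
        with torch.no_grad():
            x_hat = x + smooth(generator(x))
        d_loss = -(torch.log(discriminator(x) + 1e-8).mean()
                   + torch.log(1 - discriminator(x_hat) + 1e-8).mean())
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # 2) train the generator several times by minimizing the total loss;
        #    the log D(x) term of L_GAN does not depend on G and is omitted here
        for _ in range(g_steps_per_d_step):
            noise = generator(x)
            x_hat = x + smooth(noise)
            l_adv, l_feature = target_losses(x, x_hat)
            l_perturb = noise.flatten(1).norm(p=2, dim=1).mean()
            l_gan = torch.log(1 - discriminator(x_hat) + 1e-8).mean()
            g_loss = 1.0 / (l_feature + 1e-8) + l_gan + l_adv + l_perturb
            opt_g.zero_grad()
            g_loss.backward()
            opt_g.step()
```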
Preferably, the two-dimensional-kernel Gaussian filtering operation is as shown in equation (9):
x_adv = τ_k * G(x)   (9)
where τ_k denotes a Gaussian kernel of size k, and x_adv is the noise obtained by convolving the generated noise with the k × k Gaussian kernel to smooth it.
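The Gaussian smoothing of equation (9) can be realized as a depthwise 2-D convolution; the sketch below builds the k × k kernel from a Gaussian with an assumed standard deviation (the invention does not specify sigma) and applies it to each channel of the noise.

```python
import torch
import torch.nn.functional as F

def gaussian_smooth(noise, k=5, sigma=1.0):
    """Sketch of x_adv = tau_k * G(x): convolve the noise with a k x k Gaussian kernel."""
    ax = torch.arange(k, dtype=torch.float32) - (k - 1) / 2.0
    g1d = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    kernel = torch.outer(g1d, g1d)
    kernel = kernel / kernel.sum()                       # normalized low-pass kernel
    c = noise.shape[1]
    kernel = kernel.view(1, 1, k, k).repeat(c, 1, 1, 1)  # one kernel per channel
    kernel = kernel.to(noise.device, noise.dtype)
    return F.conv2d(noise, kernel, padding=k // 2, groups=c)
```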
Preferably, the feature extraction network is a backbone network of a target network to be attacked.
It is another object of the present invention to provide a computing device comprising a memory having stored therein executable code and a processor that, when executing the executable code, implements the method described above.
The invention has the beneficial effects that:
1. The invention provides a GAN-based target detection adversarial attack that converts the traditional gradient-based iterative optimization attack into a generation mechanism, which speeds up the generation of adversarial examples and greatly reduces the computation cost.
2. The proposed method introduces the confidence loss and position loss of target detection to train the generator, so that the generated adversarial examples can effectively deceive the target detection model; meanwhile, to overcome the drawback that the noise produced by GAN training is too conspicuous, Gaussian filtering is attached to the output of the generator to filter out overly large perturbations, making the generated noise smoother and harder for human eyes to perceive.
3. The proposed method is not only much faster than traditional gradient-based target detection attack methods at generating adversarial examples, but also achieves a better attack effect on the detector.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 shows the attack effect of adversarial examples generated by the present invention.
Fig. 3 is a graph comparing the results of the method proposed by the present invention with other methods.
Detailed Description
The invention is further analyzed with reference to the following specific examples.
As shown in fig. 1, a target detection attack method includes the following steps:
step one, the picture x used for training in the present invention is all pictures of train + val of the public data set VOC 2007. Before the pictures are input to the generator, the pictures are first resize to 300 x 300 while the corresponding label positions and sizes are modified. In addition, the target network FasterR-CNN is trained to a better weight for detecting objects, participates in later training and testing, and takes the feature extraction network VGG16 of FasterR-CNN as a feature extractor.
Step two: inputting the preprocessed picture into the generator G to obtain adversarial noise G(x); smoothing the adversarial noise G(x) by two-dimensional-kernel Gaussian filtering, and adding the smoothed adversarial noise x_adv to the original picture x before preprocessing to form the adversarial picture x̂_adv.
Step three: inputting the adversarial picture x̂_adv into the discriminator D, and computing the GAN loss.
The GAN loss is given by equation (10):
L_GAN = E_x[log D(x)] + E_x[log(1 - D(x̂_adv))]   (10)
where D(x) denotes the discrimination result for the original picture x, D(x̂_adv) denotes the discrimination result for the adversarial picture x̂_adv, and E_x denotes the expectation over the distribution of x. For a real sample (i.e., the original picture x), the closer the discriminator's output is to 1 the better, so its loss term is log D(x); for the generated adversarial picture, the closer the discriminator's output D(x̂_adv) is to 0 the better, so its loss term is log(1 - D(x̂_adv)).
Step four: inputting the adversarial picture x̂_adv into the target network Faster R-CNN to be attacked, and computing the adversarial loss L_adv.
The adversarial loss comprises a confidence loss and a regression loss.
The confidence loss is used to make the regions of interest in the picture be classified as background; the confidence loss function is given by equation (11):
L_confidence = Σ_{i=1}^{M} ℓ_BCE(C_i, 0)   (11)
where M is the total number of proposal boxes detected by the RPN (these filtered proposal boxes are generally considered to contain objects), ℓ_BCE is the binary cross-entropy loss, and C_i is the confidence score of the i-th proposal box.
The regression loss is used to perturb the detected objects so that the detection-box positions deviate from their true values. Specifically, a fixed position coefficient is assigned to the proposal boxes detected by the target network to be attacked, and the distance between the position coefficients output by the target network and this fixed position coefficient is computed. The regression loss function is given by equation (12):
L_regression = Σ_{j=1}^{M} z_j · ||(Δx_j, Δy_j, Δw_j, Δh_j) - (Δx*, Δy*, Δw*, Δh*)||_2   (12)
where z_j ∈ {0,1}, with z_j = 1 indicating that the j-th proposal box contains an object and 0 otherwise, (Δx*, Δy*, Δw*, Δh*) is the artificially set target position coefficient, (Δx_j, Δy_j, Δw_j, Δh_j) is the position coefficient of the j-th proposal box, (Δx_j, Δy_j) are the coordinates of the center point of the j-th proposal box, and Δw_j, Δh_j are its width and height.
Therefore, the adversarial loss combines the two losses, as given by equation (13):
L_adv = L_confidence + μ·L_regression   (13)
where μ is a hyperparameter balancing the confidence loss and the positional regression loss and is empirically set to 0.4.
Step five: computing the Euclidean distance between the generated adversarial picture and the original picture to obtain the perturbation loss L_perturb; simultaneously inputting the adversarial picture into the feature extraction network of the target network to be attacked, and computing the feature-layer loss L_feature.
Step six: computing the total loss from the losses obtained in steps three, four and five according to equation (14), minimizing the total loss through back-propagation, and updating the parameters of the generator and the discriminator to obtain the training weights.
L_all = α/L_feature + L_GAN + β·L_adv + γ·L_perturb   (14)
where α, β and γ are the weight coefficients of L_feature, L_adv and L_perturb, respectively.
Step seven: the generator loads the training weights and generates the corresponding adversarial pictures from the input original pictures.
The perturbation loss L_perturb is used to limit the noise generated by the generator; it is computed as the L2 distance of the noise, as in equation (15):
L_perturb = E_x(||G(x)||_2)   (15)
where ||·||_2 is the L2 norm, which limits the magnitude of the generated noise.
the characteristic layer loss LfeatureThe method comprises the steps of sending generated countermeasure pictures and original pictures into a feature extractor, extracting features of each layer, calculating the distance between the features, maximizing the loss function during training, and enabling the features of the generated countermeasure pictures to be far away from the features of the original picturesAnd thus further aggressive, the feature layer loss function is expressed as equation (16):
Figure BDA0003427021390000061
in the formula, F represents the total number of feature layers in the feature extractor, in this example, all layers of target feature extraction are selected, F represents the F-th feature layer of the set, and T (L (x, F)) represents strong normalization of the F-th feature map extracted from the original picture x.
The generator G comprises seven convolution blocks, a convolution layer and a Tanh activation function in sequence, wherein each convolution block comprises a convolution layer and an LeakyReLU activation function in sequence.
The discriminator comprises, in sequence, a convolutional layer, a LeakyReLU activation function, three convolutional blocks, a convolutional layer and a Sigmoid layer, wherein each convolutional block comprises, in turn, a convolutional layer, a BN layer and a LeakyReLU activation function.
During training, the generator and the discriminator are trained alternately: the discriminator is first trained once and its parameters are updated, and the generator is then trained multiple times so that its parameters are updated.
The two-dimensional-kernel Gaussian filtering convolves the generated noise with a Gaussian kernel of a certain size to remove high-frequency information. This brings two benefits: 1) the Gaussian filtering operation smooths the generated noise and filters out some prominent noise, so that the adversarial picture x̂_adv formed by adding the processed noise to the original picture x is much less perceptible than the unprocessed one; 2) removing the high-frequency perturbation and keeping the low-frequency perturbation improves the attack performance more effectively. The Gaussian filtering operation is shown in equation (17):
x_adv = τ_k * G(x)   (17)
where τ_k denotes a Gaussian kernel of size k (k is set to 5 in this example), and x_adv is the noise obtained by convolving the generated noise with the k × k Gaussian kernel to smooth it.
The feature extraction network is the backbone network VGG16 of Faster R-CNN, and VGG16 shares its weights with the Faster R-CNN backbone. Fig. 2 shows the attack effect of the adversarial examples generated by the present invention.
To verify the effectiveness of the invention, comparative experiments were conducted against existing adversarial attack methods for target detection models, namely DAG, RAP and UEA. The data set used for the experiments was PASCAL VOC 2007. The PASCAL VOC 2007 data set is divided into four major classes (vehicle, household, animal, person), which can be further subdivided into 20 subclasses, with 9963 pictures in total. Each picture has a corresponding XML file containing the position and class of the objects in the picture. The whole data set consists of three parts, train/val/test; in the experiments the train+val parts were selected as the GAN training set, 5011 pictures in total, and the test part of VOC 2007 was selected as the test set.
To evaluate the effectiveness of the proposed attack method, the Attack Success Rate (ASR) is selected as the evaluation index. It reflects the change of the mAP (mean average precision) before and after the attack, where the mAP is the average of the average precision over all categories and is the most important index for measuring the detection performance of a target detector. ASR is calculated as in equation (18):
ASR = (mAP_clean - mAP_attack) / mAP_clean   (18)
where mAP_clean is the mAP of the target detector before the attack and mAP_attack is the mAP of the target detector after the attack; the higher the ASR, the stronger the attack.
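A trivial helper implementing equation (18) is shown below for completeness.

```python
def attack_success_rate(map_clean, map_attack):
    """ASR = (mAP_clean - mAP_attack) / mAP_clean, as in equation (18)."""
    return (map_clean - map_attack) / map_clean
```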
To measure the time cost of generating adversarial examples, adversarial examples are generated for the whole test set and the average generation time is taken as the time-cost evaluation index of the attack method.
To evaluate the magnitude of the noise in the adversarial examples generated by the proposed method, the L2 distance is used as an evaluation index.
The experimental results are shown in Table 1. The proposed method achieves the highest attack success rate, reaching 97.8%, which is 10% higher than the UEA model and 5.75% higher than the second-highest DAG model. In terms of generation speed, the proposed method and UEA, which is also GAN-based, both take 0.06 s, far faster than the other methods. Regarding the generated perturbation, the proposed method has a larger noise L2 distance than DAG and RAP, because it does not optimize a perturbation tailored to each picture through back-propagation at generation time; however, its adversarial examples have a lower L2 distance than UEA, and, as shown in Fig. 3, the method produces pictures of better quality than UEA. Overall, the proposed method outperforms the existing methods and can effectively attack the target detection task.
TABLE 1 Results of attacks on Faster R-CNN by different methods

Claims (10)

1. A target detection attack method is characterized by comprising the following steps:
step one: preprocessing a clean original picture x;
step two: inputting the preprocessed picture into a generator G to obtain adversarial noise G(x); smoothing the adversarial noise G(x) by two-dimensional-kernel Gaussian filtering, and adding the smoothed adversarial noise x_adv to the original picture x before preprocessing to form an adversarial picture x̂_adv;
step three: inputting the adversarial picture x̂_adv into a discriminator D, and computing the GAN loss;
the GAN loss is given by equation (1):
L_GAN = E_x[log D(x)] + E_x[log(1 - D(x̂_adv))]   (1)
wherein D(x) denotes the discrimination result of the original picture x, D(x̂_adv) denotes the discrimination result of the adversarial picture x̂_adv, and E_x denotes the expectation over the distribution of x;
step four: inputting the adversarial picture x̂_adv into a target network to be attacked, and computing the adversarial loss L_adv;
the adversarial loss comprises a confidence loss and a regression loss;
the confidence loss is used for classifying the regions of interest in the picture as background; the confidence loss function is given by equation (2):
L_confidence = Σ_{i=1}^{M} ℓ_BCE(C_i, 0)   (2)
wherein M represents the total number of proposal boxes of the target network to be attacked, ℓ_BCE represents the binary cross-entropy loss, and C_i represents the confidence score of the i-th proposal box;
the regression loss is used for perturbing the detected objects so that the positions of the detection boxes deviate from their true values; the regression loss function is given by equation (3):
L_regression = Σ_{j=1}^{M} z_j · ||(Δx_j, Δy_j, Δw_j, Δh_j) - (Δx*, Δy*, Δw*, Δh*)||_2   (3)
wherein z_j ∈ {0,1} represents whether an object exists in the j-th proposal box, (Δx*, Δy*, Δw*, Δh*) represents the target position coefficient, (Δx_j, Δy_j, Δw_j, Δh_j) represents the position coefficient of the j-th proposal box, wherein (Δx_j, Δy_j) represents the coordinates of the center point of the j-th proposal box, and Δw_j, Δh_j represent the width and height of the j-th proposal box;
the adversarial loss is given by equation (4):
L_adv = L_confidence + μ·L_regression   (4)
wherein μ is a hyperparameter balancing confidence loss and position regression loss;
step five: computing the Euclidean distance between the generated adversarial picture and the original picture to obtain the perturbation loss L_perturb; simultaneously inputting the adversarial picture into a feature extraction network in the target network to be attacked, and computing the feature-layer loss L_feature;
Step six: calculating total loss according to the loss obtained in the third, fourth and fifth steps, then minimizing the total loss through back propagation, and updating parameters of a generator and a discriminator to obtain a training weight;
L_all = α/L_feature + L_GAN + β·L_adv + γ·L_perturb   (5)
wherein α, β and γ are the weight coefficients of L_feature, L_adv and L_perturb, respectively;
step seven: loading, by the generator, the training weights obtained in step six, and generating a corresponding adversarial picture from the original picture.
2. The target detection attack method according to claim 1, wherein: the perturbation loss L_perturb is used for limiting the noise generated by the generator and is computed as the L2 distance of the noise:
L_perturb = E_x(||G(x)||_2)   (6)
wherein ||·||_2 is the L2 norm, which is used to limit the magnitude of the generated noise.
3. The target detection attack method according to claim 1, wherein: the feature-layer loss L_feature is calculated as follows:
L_feature = Σ_{f=1}^{F} ||T(L(x̂_adv, f)) - T(L(x, f))||_2   (7)
wherein F represents the total number of feature layers in the feature extractor, f represents the f-th feature layer, and T(L(x, f)) represents the strong normalization of the f-th feature map extracted from the original picture x.
4. The target detection attack method according to claim 1, wherein: the generator G comprises, in sequence, seven convolutional blocks, a convolutional layer and a Tanh activation function, wherein each convolutional block comprises a convolutional layer and a LeakyReLU activation function.
5. The target detection attack method according to claim 1 or 4, wherein: the discriminator comprises, in sequence, a convolutional layer, a LeakyReLU activation function, three convolutional blocks, a convolutional layer and a Sigmoid layer, wherein each convolutional block comprises, in turn, a convolutional layer, a BN layer and a LeakyReLU activation function.
6. The target detection attack method according to claim 1, wherein: during training, the generator and the discriminator are trained alternately: the discriminator is first trained once and its parameters are updated, and the generator is then trained multiple times so that its parameters are updated.
7. The target detection attack method according to claim 1, wherein: the two-dimensional-kernel Gaussian filtering operation in step two is as shown in equation (9):
x_adv = τ_k * G(x)   (9)
wherein τ_k represents a Gaussian kernel of size k, and x_adv is the noise obtained by convolving the generated noise with the k × k Gaussian kernel to smooth it.
8. The target detection attack method according to claim 1, wherein: the feature extraction network is a backbone network of the target network to be attacked.
9. The target detection attack method according to claim 8, wherein: the target network to be attacked is Faster R-CNN, and the feature extraction network is VGG16.
10. A computing device comprising a memory having stored therein executable code and a processor that, when executing the executable code, implements the method of any of claims 1-9.
CN202111580489.0A 2021-12-22 2021-12-22 Target detection attack method and device Pending CN114330652A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111580489.0A CN114330652A (en) 2021-12-22 2021-12-22 Target detection attack method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111580489.0A CN114330652A (en) 2021-12-22 2021-12-22 Target detection attack method and device

Publications (1)

Publication Number Publication Date
CN114330652A true CN114330652A (en) 2022-04-12

Family

ID=81055390

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111580489.0A Pending CN114330652A (en) 2021-12-22 2021-12-22 Target detection attack method and device

Country Status (1)

Country Link
CN (1) CN114330652A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination