CN114841983A - Image adversarial example detection method and system based on decision scores - Google Patents
- Publication number
- CN114841983A CN114841983A CN202210556274.3A CN202210556274A CN114841983A CN 114841983 A CN114841983 A CN 114841983A CN 202210556274 A CN202210556274 A CN 202210556274A CN 114841983 A CN114841983 A CN 114841983A
- Authority
- CN
- China
- Prior art keywords
- model
- decision score
- decision
- training
- tiny
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T 7/0002 — Image analysis; inspection of images, e.g. flaw detection
- G06F 18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F 18/241 — Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06N 3/045 — Neural networks; combinations of networks
- G06N 3/08 — Neural networks; learning methods
- G06V 10/764 — Image or video recognition using classification, e.g. of video objects
- G06V 10/774 — Image or video recognition; generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06V 10/82 — Image or video recognition using neural networks
- G06T 2207/10004 — Still image; photographic image
- G06T 2207/20081 — Training; learning
- G06T 2207/20084 — Artificial neural networks [ANN]
Abstract
The invention provides a decision-score-based method and system for detecting adversarial image examples. The method comprises the following steps: acquire an image data set and preprocess it; train a different model structure on each data set; select a specific layer in each model and compute the decision scores of its neurons; build a separate binary classifier for each data set; train each binary classifier on the decision scores computed from the specific layer of its model; feed the decision scores of adversarial and benign samples into the trained binary classifiers for testing; and optimize a binary classifier if its classification accuracy is insufficient. Starting from the neurons inside the model, the proposed scheme computes the model's decision scores from a small number of samples, trains a simple binary classifier, and exploits the difference between the decision scores of benign and adversarial samples to achieve high-accuracy, low-cost adversarial example detection.
Description
Technical Field
The invention belongs to the field of image processing, and in particular relates to a decision-score-based method and system for detecting adversarial image examples.
Background
Artificial intelligence technology has developed rapidly, and the achievements of deep neural network models in image processing have attracted ever more manpower and resources into computer vision research. Convolutional neural networks in particular excel at image data processing, especially pattern recognition tasks. They are now deployed for image classification, defect detection, semantic segmentation, image restoration and similar tasks, and are widely applied in scenarios such as autonomous driving, face recognition and medical image analysis.
However, recent research has found that, because deep models are data-driven, their training cannot be fully controlled, and their non-linearity in high-dimensional feature spaces makes them vulnerable to adversarial attacks, which hinders their further application and development in scenarios with high security requirements. Artificial intelligence is gradually replacing humans in autonomous decision-making across many fields, and security vulnerabilities in data or algorithms can cause serious personal injury and property loss. Research on defense and detection techniques against adversarial examples therefore improves the robustness and safety of models facing unknown threats, and is important for making deployed applications reliable and for realizing safe and trustworthy artificial intelligence algorithms.
During the training of a machine-learning model, an attacker can degrade the model's accuracy by carefully modifying existing training data, or can continuously generate new adversarial examples that make the model predict incorrectly. Deceptive adversarial examples can cause AI-driven recognition systems to misjudge: carefully perturbed inputs such as malicious software or disturbed pictures can make a classifier misrecognize them, and a hard-to-recognize object placed by the roadside can force a passing car into a safety-protection mode, damaging the usability of the autonomous-driving system. Attacks are divided into white-box and black-box attacks according to whether the target model's structure is known; into targeted and untargeted attacks according to the attacker's goal; and into virtual digital-space and real physical-space attacks according to where the adversarial example is applied. Various attack methods have been proposed: Goodfellow et al. proposed the Fast Gradient Sign Method (FGSM), which generates adversarial perturbations by a fast search along the gradient direction. Kurakin et al. proposed the Basic Iterative Method (BIM), which computes the perturbation iteratively with a small step size, and extended BIM into the Iterative Least-Likely Class method to obtain adversarial examples with stronger attack performance but weaker transferability. Moosavi-Dezfooli et al. proposed DeepFool, which achieves an attack effect similar to FGSM with smaller perturbations. Papernot et al. proposed the Jacobian-based Saliency Map Attack (JSMA), which achieves pixel-level attacks by limiting the zero-norm of the perturbation.
Chen et al. designed ZOO, a zeroth-order optimization attack that directly estimates the gradient of the target model to produce adversarial examples.
Efficient and convenient defense techniques are a prerequisite for the further wide and reliable application of deep-learning models. By defensive effect, methods divide into detection defenses and complete defenses; detection methods reject detected adversarial examples by exploiting the difference between adversarial and normal samples, so as to give early warning of potential threats. Hendrycks and Gimpel observed that benign and adversarial inputs differ in their softmax distributions, and detect attacks by measuring the relative entropy between the uniform distribution and the softmax distribution. Xu et al. proposed feature squeezing to detect adversarial perturbations of an image, comparing the target network's predictions on the original and compressed images. Cohen et al. combined the k-nearest-neighbor algorithm with influence functions to extract the Nearest Neighbor Influence Function (NNIF) for adversarial example detection. Meng et al. proposed MagNet, an adversarial detection framework that uses one or more external detectors to classify an input image as adversarial or benign, aiming to learn the manifold of benign samples. Ma et al. proposed a detection method based on Local Intrinsic Dimensionality (LID), which trains a detector on the intrinsic dimensional characteristics of samples to detect adversarial examples.
Although existing detection methods work well, they still face the following challenges:
(1) most methods based on prior knowledge of adversarial examples need a large number of adversarial examples to learn the overall pattern of output confidences or hidden-layer features, so they depend heavily on training data and incur high computation time;
(2) some detection methods separate adversarial from benign samples with a manually set threshold, making them very sensitive to parameters and poor at generalizing;
(3) preprocessing-based detection methods depend on the difference between the model's outputs before and after processing; their detection effect is highly tied to the preprocessing parameters, adversarial examples with large perturbations are hard to detect, and the detection range is limited.
Disclosure of Invention
In order to solve the above technical problems, the present invention provides a decision-score-based method and system for detecting adversarial image examples.
A first aspect of the invention discloses a decision-score-based method for detecting adversarial image examples, comprising the following steps:
step S1, acquisition and preprocessing of the image data sets: acquiring the CIFAR-10, GTSRB and tiny-ImageNet data sets and preprocessing them;
step S2, training the neural networks: training a different model structure on each data set, namely the VGG19 model on the CIFAR-10 data set, the LeNet-5 model on the GTSRB data set, and the MobileNetV1 model on the tiny-ImageNet data set;
step S3, calculating decision scores: for the VGG19, LeNet-5 and MobileNetV1 models, selecting a specific layer in each model and computing the decision scores of its neurons;
step S4, training binary classifiers: building a separate binary classifier for each of the CIFAR-10, GTSRB and tiny-ImageNet data sets, then training each one on the decision scores computed from the specific layer of its model;
step S5, parameter optimization of the binary classifiers: feeding the decision scores of adversarial samples into the trained binary classifiers for testing, and optimizing a classifier if its classification accuracy is insufficient;
likewise feeding the decision scores of benign samples into the binary classifiers, and optimizing a classifier if its classification accuracy is insufficient.
According to the method of the first aspect of the present invention, in step S1, preprocessing the CIFAR-10, GTSRB and tiny-ImageNet data sets comprises:
for the GTSRB data set, randomly extracting 30% of the pictures of each class as the test set and using the rest as the training set; for the tiny-ImageNet data set, using 11,000 pictures for training and 2,000 for testing accuracy; and one-hot encoding the class labels.
According to the method of the first aspect of the present invention, in step S3, selecting a specific layer in each model to compute the decision scores of its neurons comprises:
selecting the Flatten layer or the global average pooling layer between the convolutional layers and the fully connected layers as the layer for decision-score calculation.
According to the method of the first aspect of the present invention, in step S3, selecting the Flatten layer or the global average pooling layer between the convolutional layers and the fully connected layers as the calculation layer for the decision score specifically comprises:
selecting the Flatten layer for the VGG19 and LeNet-5 models;
and selecting the global average pooling layer for the MobileNetV1 model.
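The difference between the two candidate layers can be illustrated with a small numpy sketch (the 7×7×64 feature-map shape below is illustrative, not taken from the patent):

```python
import numpy as np

# A hypothetical convolutional feature map: height 7, width 7, 64 channels.
feature_map = np.random.rand(7, 7, 64)

# Flatten layer (used here for VGG19 / LeNet-5): unroll all spatial
# positions and channels into one long vector.
flattened = feature_map.reshape(-1)   # shape (7*7*64,) = (3136,)

# Global average pooling layer (used here for MobileNetV1): average each
# channel over its spatial positions, leaving one value per channel.
gap = feature_map.mean(axis=(0, 1))   # shape (64,)
```

Either way, the selected layer yields a fixed-length vector of neuron activations from which per-neuron decision scores can be computed.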
According to the method of the first aspect of the present invention, in step S3, the decision score is calculated as follows,
wherein
μ(x) is the decision score;
x is an image sample;
y_c denotes the confidence of the model's predicted class label c for the input x.
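The formula for μ(x) did not survive extraction here. Based on the variables just listed and the differentiation step recited in the claims, one plausible form (a hypothetical reconstruction, not the patent's verbatim definition) scores each neuron of the selected layer by how strongly the predicted-class confidence responds to its activation:

```latex
% Hypothetical reconstruction: a_j is the activation of neuron j in the
% selected layer; y_c is the confidence of the predicted class c for input x.
\mu_j(x) = a_j \cdot \frac{\partial y_c}{\partial a_j},
\qquad
\mu(x) = \bigl(\mu_1(x), \ldots, \mu_n(x)\bigr)
```

Under this reading, the decision score is a vector with one entry per neuron of the selected layer, consistent with feeding it to the binary classifiers of step S4.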
According to the method of the first aspect of the present invention, in step S4, building a separate binary classifier for each of the CIFAR-10, GTSRB and tiny-ImageNet data sets comprises:
for the CIFAR-10 data set, the corresponding binary classifier has a 512-unit fully connected layer with ReLU activation, followed by a fully connected layer of 2 neurons with softmax activation;
for the GTSRB data set, the corresponding binary classifier consists of a 1024-unit fully connected layer with ReLU activation, a 512-unit fully connected layer with ReLU activation, and finally a fully connected layer of 2 neurons with softmax activation;
for the tiny-ImageNet data set, the corresponding binary classifier consists of a 1024-unit fully connected layer with ReLU activation, two identical 512-unit fully connected layers with ReLU activation, and finally a fully connected layer with softmax activation.
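As a sketch, the GTSRB classifier head described above (1024 → 512 → 2, ReLU activations, softmax output) can be written as a plain-numpy forward pass. The weights below are random placeholders, and the input width `d_in` is an assumption, since the patent ties it to the number of neurons in the selected layer:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # numerically stabilised
    return e / e.sum(axis=-1, keepdims=True)

# Assumed input width: one decision score per neuron of the selected layer.
d_in = 256
W1, b1 = rng.normal(size=(d_in, 1024)) * 0.01, np.zeros(1024)
W2, b2 = rng.normal(size=(1024, 512)) * 0.01, np.zeros(512)
W3, b3 = rng.normal(size=(512, 2)), np.zeros(2)

def classify(scores):
    """Map a batch of decision-score vectors to (P(benign), P(adversarial))."""
    h = relu(scores @ W1 + b1)
    h = relu(h @ W2 + b2)
    return softmax(h @ W3 + b3)

probs = classify(rng.normal(size=(4, d_in)))   # shape (4, 2)
```

A trained version of this head is what steps S4 and S5 fit and then tune on decision scores.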
According to the method of the first aspect of the present invention, in step S4, the adversarial samples whose decision scores are used to train the binary classifiers for the CIFAR-10, GTSRB and tiny-ImageNet data sets are generated as follows:
adversarial samples generated by FGSM are used to obtain the decision scores of generic adversarial samples;
in step S5, the adversarial samples whose decision scores are used for parameter optimization of the binary classifiers are generated as follows:
adversarial samples are generated with BIM and JSMA, and their decision scores are calculated for the parameter optimization.
A second aspect of the invention discloses a decision-score-based system for detecting adversarial image examples, comprising:
a first processing module configured for acquisition and preprocessing of the image data sets: acquiring the CIFAR-10, GTSRB and tiny-ImageNet data sets and preprocessing them;
a second processing module configured to train the neural networks: training a different model structure on each data set, namely the VGG19 model on the CIFAR-10 data set, the LeNet-5 model on the GTSRB data set, and the MobileNetV1 model on the tiny-ImageNet data set;
a third processing module configured to compute decision scores: for the VGG19, LeNet-5 and MobileNetV1 models, selecting a specific layer in each model and computing the decision scores of its neurons;
a fourth processing module configured to train binary classifiers: building a separate binary classifier for each of the CIFAR-10, GTSRB and tiny-ImageNet data sets, then training each one on the decision scores computed from the specific layer of its model;
a fifth processing module configured to optimize the parameters of the binary classifiers: feeding the decision scores of adversarial samples into the trained binary classifiers for testing, and optimizing a classifier if its classification accuracy is insufficient;
and likewise feeding the decision scores of benign samples into the binary classifiers, and optimizing a classifier if its classification accuracy is insufficient.
According to the system of the second aspect of the present invention, the first processing module configured to preprocess the CIFAR-10, GTSRB and tiny-ImageNet data sets operates as follows:
for the GTSRB data set, randomly extracting 30% of the pictures of each class as the test set and using the rest as the training set; for the tiny-ImageNet data set, using 11,000 pictures for training and 2,000 for testing accuracy; and one-hot encoding the class labels.
According to the system of the second aspect of the present invention, the third processing module configured to select a specific layer in each model to compute the neurons' decision scores operates as follows:
the Flatten layer or the global average pooling layer between the convolutional layers and the fully connected layers is selected as the layer for decision-score calculation.
According to the system of the second aspect of the present invention, the third processing module configured to select the Flatten layer or the global average pooling layer between the convolutional layers and the fully connected layers as the calculation layer specifically:
selects the Flatten layer for the VGG19 and LeNet-5 models;
and selects the global average pooling layer for the MobileNetV1 model.
According to the system of the second aspect of the present invention, the third processing module configured to calculate the decision score operates as follows,
wherein
μ(x) is the decision score;
x is an image sample;
y_c denotes the confidence of the model's predicted class label c for the input x.
According to the system of the second aspect of the present invention, the fourth processing module configured to build a separate binary classifier for each of the CIFAR-10, GTSRB and tiny-ImageNet data sets operates as follows:
for the CIFAR-10 data set, the corresponding binary classifier has a 512-unit fully connected layer with ReLU activation, followed by a fully connected layer of 2 neurons with softmax activation;
for the GTSRB data set, the corresponding binary classifier consists of a 1024-unit fully connected layer with ReLU activation, a 512-unit fully connected layer with ReLU activation, and finally a fully connected layer of 2 neurons with softmax activation;
for the tiny-ImageNet data set, the corresponding binary classifier consists of a 1024-unit fully connected layer with ReLU activation, two identical 512-unit fully connected layers with ReLU activation, and finally a fully connected layer with softmax activation.
According to the system of the second aspect of the present invention, the fourth processing module generates the adversarial samples whose decision scores train the binary classifiers for the CIFAR-10, GTSRB and tiny-ImageNet data sets as follows:
adversarial samples generated by FGSM are used to obtain the decision scores of generic adversarial samples;
according to the system of the second aspect of the present invention, the fifth processing module generates the adversarial samples whose decision scores are used for parameter optimization of the binary classifiers as follows:
adversarial samples are generated with BIM and JSMA, and their decision scores are calculated for the parameter optimization.
A third aspect of the invention discloses an electronic device. The electronic device comprises a memory and a processor; the memory stores a computer program, and when the processor executes the computer program, it carries out the steps of the decision-score-based image adversarial example detection method of any one of the first aspect of the disclosure.
A fourth aspect of the invention discloses a computer-readable storage medium. The computer-readable storage medium stores a computer program which, when executed by a processor, performs the steps of the decision-score-based image adversarial example detection method of any one of the first aspect of the present disclosure.
Starting from the neurons inside the model, the proposed scheme computes the model's decision scores from a small number of samples, trains a simple binary classifier, and exploits the difference between the decision scores of benign and adversarial samples to achieve high-accuracy, low-cost adversarial example detection.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a decision-score-based image adversarial example detection method according to an embodiment of the invention;
FIG. 2 is a block diagram of a decision-score-based image adversarial example detection method according to an embodiment of the invention;
FIG. 3 is an overall framework diagram of a decision-score-based image adversarial example detection method according to an embodiment of the invention;
FIG. 4 is a block diagram of a decision-score-based image adversarial example detection system according to an embodiment of the invention;
fig. 5 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention discloses a decision-score-based method for detecting adversarial image examples. Fig. 1 is a flowchart of the method according to an embodiment of the present invention; as shown in Figs. 1, 2 and 3, the method comprises:
step S1, acquisition and preprocessing of the image data sets: acquiring the CIFAR-10, GTSRB and tiny-ImageNet data sets and preprocessing them;
step S2, training the neural networks: training a different model structure on each data set, namely the VGG19 model on the CIFAR-10 data set, the LeNet-5 model on the GTSRB data set, and the MobileNetV1 model on the tiny-ImageNet data set;
step S3, calculating decision scores: for the VGG19, LeNet-5 and MobileNetV1 models, selecting a specific layer in each model and computing the decision scores of its neurons;
step S4, training binary classifiers: building a separate binary classifier for each of the CIFAR-10, GTSRB and tiny-ImageNet data sets, then training each one on the decision scores computed from the specific layer of its model;
step S5, optimizing the parameters of the binary classifiers: feeding the decision scores of adversarial samples into the trained binary classifiers for testing, and optimizing a classifier if its classification accuracy is insufficient;
likewise feeding the decision scores of benign samples into the binary classifiers, and optimizing a classifier if its classification accuracy is insufficient.
In step S1, acquisition and preprocessing of the image data sets: the CIFAR-10, GTSRB and tiny-ImageNet data sets are acquired and preprocessed.
In some embodiments, in step S1, preprocessing the CIFAR-10, GTSRB and tiny-ImageNet data sets comprises:
for the GTSRB data set, randomly extracting 30% of the pictures of each class as the test set and using the rest as the training set; for the tiny-ImageNet data set, using 11,000 pictures for training and 2,000 for testing accuracy; and one-hot encoding the class labels.
Specifically, the method was validated on the CIFAR-10, GTSRB and tiny-ImageNet datasets. The CIFAR-10 dataset consists of 60,000 color images of size 32×32 in 10 classes, with 6,000 images per class, a training set of 50,000 and a test set of 10,000. The GTSRB dataset includes over 50,000 German traffic-sign pictures of size 48×48 in 43 classes. The tiny-ImageNet dataset is a subset of the ImageNet dataset; here 13,000 images belonging to 10 classes are used, each sample of size 224×224×3. The image samples and their corresponding class labels are stored; the sample set is denoted X = {x1, x2, …, xm}, and the class label of each picture is denoted y.
For the GTSRB dataset, 30% of the pictures are randomly drawn from each class as the test set, with the remaining pictures as the training set. For the tiny-ImageNet dataset, 11,000 samples are used for training and 2,000 for testing accuracy. The class labels y are one-hot encoded to facilitate subsequent training.
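The preprocessing just described can be sketched in numpy; this is a minimal illustration of the per-class 30% split and the one-hot encoding, with function names chosen here for illustration (the patent does not name them):

```python
import numpy as np

def one_hot(labels, num_classes):
    """One-hot encode the integer class labels y, as described for step S1."""
    encoded = np.zeros((len(labels), num_classes), dtype=np.float32)
    encoded[np.arange(len(labels)), labels] = 1.0
    return encoded

def split_per_class(samples, labels, test_fraction=0.3, seed=0):
    """Randomly draw `test_fraction` of each class as the test set
    (the GTSRB split); the remaining pictures form the training set."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        cut = int(len(idx) * test_fraction)
        test_idx.extend(idx[:cut])
        train_idx.extend(idx[cut:])
    return (samples[train_idx], labels[train_idx],
            samples[test_idx], labels[test_idx])
```

The same split function applies unchanged to any labeled image array, which is why the per-class draw is written generically rather than GTSRB-specifically.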
In step S2, the neural network is trained: different datasets are trained with different model structures, namely the CIFAR-10 dataset with a VGG19 model, the GTSRB dataset with a LeNet-5 model, and the tiny-ImageNet dataset with a MobileNetV1 model.
Specifically, different datasets are trained using different model structures: the CIFAR-10 dataset uses the VGG19 model, the GTSRB dataset uses the LeNet-5 model, and the tiny-ImageNet dataset uses the MobileNetV1 model. The input layer of the classifier model has the same size as the image, [H, W, C], and the output layer has size [H×W×C, 1], where H is the image height, W is the width, and C is the number of input channels.
The sample x and its corresponding class label y are input into the classifier for training, and the loss function of the model is defined as:

L_model = (1/m) Σ_{i=1}^{m} CE(f(x_i), y_i)

where L_model is the loss function of the model, m is the total number of samples used for training, CE(·) is the cross-entropy function, f(x_i) is the model output for sample x_i, and i is the index of the sample. After training is finished, the model and the trained parameters are saved.
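The loss above can be written directly in numpy; this is a minimal sketch (the function name `model_loss` is chosen here for illustration), computing the mean cross-entropy over the m training samples:

```python
import numpy as np

def model_loss(probs, onehot_labels, eps=1e-12):
    """L_model = (1/m) * sum_i CE(f(x_i), y_i): mean cross-entropy between
    the model's softmax outputs `probs` and the one-hot class labels."""
    m = probs.shape[0]
    ce = -np.sum(onehot_labels * np.log(probs + eps), axis=1)  # CE per sample
    return float(ce.sum() / m)
```

The small `eps` guards against log(0) for fully confident wrong predictions, a standard numerical-stability measure rather than part of the definition.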
The training hyper-parameters are set as follows: the Adam optimizer is used with a learning rate of 0.0001. The epoch count is set to 50 for VGG19 and 20 for LeNet-5.
In step S3, the decision score is calculated: for the VGG19 model, the LeNet-5 model and the MobileNetV1 model, a specific layer in each model is selected to calculate the decision scores of its neurons.
In some embodiments, in step S3, the method for selecting a specific layer in each model to calculate the decision score of the neuron includes:
the Flatten layer or global average pooling layer, located between the convolutional layers and the fully-connected layers, is selected as the layer for decision-score calculation.
The specific method for selecting the Flatten layer or the global average pooling layer between the convolutional layers and the fully-connected layers as the decision-score calculation layer includes:
selecting the Flatten layer for the VGG19 model and the LeNet-5 model;
and selecting the global average pooling layer for the MobileNetV1 model.
The method for calculating the decision score is as follows:
wherein,
μ(x) is the decision score;
x is an image sample;
y_c is the confidence of the model's predicted class label c for the input x.
Specifically, a particular layer in the model is chosen to compute the decision scores of its neurons. In general, the Flatten layer or the global average pooling layer, located between the convolutional and fully-connected layers, is selected for the computation, because these layers contain both pixel features and high-dimensional classification features, which are important for the final decision of the model. Specifically, the Flatten layer is selected for the VGG19 and LeNet-5 models, and the global average pooling layer is selected for the MobileNetV1 model.
The decision score is then calculated. For a sample x, the decision score is defined as:
wherein,
μ(x) is the decision score;
x is an image sample;
y_c is the confidence of the model's predicted class label c for the input x;
the decision score reflects the degree to which the output of the selected layer influences the final decision of the model.
The initial benign samples x and their corresponding adversarial samples are input into the model, and their decision scores at the selected layer are calculated and saved.
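The patent's exact decision-score formula appears in an equation image that is not reproduced in this text, so the following numpy sketch uses a gradient-times-activation weighting purely as an illustrative assumption: each neuron of the selected layer is scored by its contribution to the confidence y_c of the predicted class, computed analytically for a single dense softmax head (W, b):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def decision_score(activations, W, b, c):
    """Illustrative per-neuron decision score for the selected layer.
    Scores each activation a_j by (d y_c / d a_j) * a_j, i.e. by how much
    the neuron pushes the confidence of class c; the exact formula used by
    the patent is an assumption here, not reproduced from the filing."""
    y = softmax(activations @ W + b)
    # softmax gradient: d y_c / d z_k = y_c * (delta_ck - y_k)
    dy_dz = y[c] * ((np.arange(len(y)) == c).astype(float) - y)
    grad = W @ dy_dz          # chain rule back to the layer activations
    return grad * activations
```

In a real model the gradient would come from the framework's autodiff rather than this closed form; the closed form is used only to keep the sketch self-contained.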
In step S4, the binary classifiers are trained: different binary classifiers are built for the CIFAR-10, GTSRB and tiny-ImageNet datasets respectively, and the binary classifier corresponding to each dataset is trained using the decision scores calculated at the selected layer of its model.
In some embodiments, in step S4, the method for building different binary classifiers for the CIFAR-10, GTSRB and tiny-ImageNet datasets respectively includes:
for the CIFAR-10 dataset, the corresponding binary classifier is a 512-unit fully-connected layer with relu activation, followed by a 2-neuron fully-connected layer with softmax activation;
for the GTSRB dataset, the corresponding binary classifier is a 1024-unit fully-connected layer with relu activation, a 512-unit fully-connected layer with relu activation, and finally a 2-neuron fully-connected layer with softmax activation;
for the tiny-ImageNet dataset, the corresponding binary classifier is a 1024-unit fully-connected layer with relu activation, two identical 512-unit fully-connected layers with relu activation, and finally a fully-connected layer with softmax activation.
The adversarial samples whose decision scores are used to train the binary classifiers corresponding to the CIFAR-10, GTSRB and tiny-ImageNet datasets are generated as follows:
adversarial samples generated by FGSM are used to obtain the decision scores of generic adversarial samples.
Specifically, the classifier is built by constructing a neural network with library functions in keras: conv_2d denotes a two-dimensional convolutional layer, filter_size denotes the size of the convolution kernel, pool denotes a pooling layer, and full_connected denotes a fully-connected layer, typically placed as the last layer of the model, whose activation layer usually uses the softmax function.
Since the decision score of the selected layer is a simple two-dimensional matrix, classification can be achieved with a simple fully-connected network. For the CIFAR-10 dataset, the binary classifier is a 512-unit fully-connected layer with relu activation, followed by a 2-neuron fully-connected layer with softmax activation. For the GTSRB dataset, the binary classifier is a 1024-unit fully-connected layer with relu activation, a 512-unit fully-connected layer with relu activation, and finally a 2-neuron fully-connected layer with softmax activation. For the tiny-ImageNet dataset, the binary classifier is a 1024-unit fully-connected layer with relu activation, two identical 512-unit fully-connected layers with relu activation, and finally a fully-connected layer with softmax activation. The structures are shown in Table 1.
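The three detector structures can be captured as plain layer specifications; this sketch is illustrative (the 2-unit softmax output for tiny-ImageNet is implied by the binary task rather than stated explicitly), and with keras each spec would map to `Sequential([Dense(units, activation=act) for units, act in spec])`:

```python
def binary_classifier_spec(dataset):
    """(units, activation) pairs for the stacked Dense layers of each
    detector, following the structures described for step S4 / Table 1."""
    specs = {
        "cifar10": [(512, "relu"), (2, "softmax")],
        "gtsrb": [(1024, "relu"), (512, "relu"), (2, "softmax")],
        # final width of 2 assumed for the binary task; not stated explicitly
        "tiny-imagenet": [(1024, "relu"), (512, "relu"), (512, "relu"),
                          (2, "softmax")],
    }
    return specs[dataset]
```

Keeping the structures as data rather than hard-coded models makes the per-dataset detectors easy to rebuild or vary during the parameter-optimization step.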
TABLE 1
where Dense denotes a fully-connected layer and activation denotes the activation function.
The binary classifier is trained using the decision scores obtained in step S3. Specifically, the binary classifier is trained on the decision scores of adversarial samples generated by FGSM, which serve as generic adversarial samples. For the CIFAR-10 dataset, 200 benign samples and 200 FGSM adversarial samples are used. For the GTSRB dataset, 400 FGSM adversarial samples and 15 benign samples are used. For the tiny-ImageNet dataset, 600 adversarial samples and 6 benign samples are used to generate the decision scores that train the binary classifier.
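FGSM itself is a single signed-gradient step; a minimal numpy sketch, assuming the loss gradient with respect to the input has already been computed (e.g. by the framework's autodiff) and pixels lie in [0, 1]:

```python
import numpy as np

def fgsm(x, grad_loss_x, epsilon=0.03):
    """Fast Gradient Sign Method: move x one epsilon-step along the sign of
    the loss gradient, then clip back to the valid pixel range [0, 1]."""
    x_adv = x + epsilon * np.sign(grad_loss_x)
    return np.clip(x_adv, 0.0, 1.0)
```

The epsilon value here is a placeholder; the patent does not specify the perturbation budget used to generate its training-set adversarial samples.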
The hyper-parameters for training the binary classifier are set as follows: the loss function is categorical_crossentropy, the optimizer is adam, and the batch size is 5. The epoch count is set to 10 for the CIFAR-10 dataset, 15 for the GTSRB dataset, and 7 for tiny-ImageNet.
The binary classifier model with the highest classification accuracy is saved for each dataset.
In step S5, the parameters of the binary classifier are optimized: the decision scores of adversarial samples are input into the trained binary classifier for testing, and the binary classifier is optimized if the classification accuracy is insufficient;
the decision scores of benign samples are likewise input into the binary classifier, and the binary classifier is optimized if the classification accuracy is insufficient.
In some embodiments, in step S5, the adversarial samples whose decision scores are used for parameter optimization of the binary classifier are generated as follows:
adversarial samples are generated using BIM and JSMA, and their decision scores are calculated for parameter optimization.
Specifically, the decision scores of the BIM and JSMA adversarial samples are calculated and input into the trained binary classifier for testing. If the classification accuracy is insufficient, the number of training epochs is modified, the number of adversarial samples in the training set is increased, or the structure of the binary classifier is enlarged, and the binary classifier is retrained until satisfactory classification accuracy is achieved.
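JSMA requires a full saliency-map computation, but BIM is simply FGSM iterated with a projection back into the epsilon-ball; a minimal numpy sketch, where `grad_fn` is an assumed callback returning the loss gradient at the current iterate:

```python
import numpy as np

def bim(x, grad_fn, epsilon=0.03, alpha=0.01, steps=10):
    """Basic Iterative Method: repeated small signed-gradient steps of size
    alpha, clipped into the epsilon-ball around x and into [0, 1]."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)  # project to the ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                  # keep valid pixels
    return x_adv
```

Testing the detector on BIM/JSMA samples it was never trained on is what makes the accuracy check in this step meaningful: the FGSM-trained classifier must generalize to unseen attack types.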
The decision scores of benign samples are then input into the binary classifier; if the classification accuracy is insufficient, the number of benign samples in the training set is increased and the binary classifier is retrained, so that the detector can accurately identify adversarial samples while maintaining classification accuracy on benign samples.
In conclusion, the scheme provided by the invention starts from the neurons in the model, calculates the model's decision scores from a small number of samples, trains a simple binary classifier, and exploits the difference between the decision scores of benign and adversarial samples to achieve high-accuracy, low-cost detection of adversarial samples.
A second aspect of the invention discloses an image adversarial sample detection system based on decision scores. FIG. 4 is a block diagram of an image adversarial sample detection system based on decision scores according to an embodiment of the invention; as shown in FIG. 4, the system 100 includes:
a first processing module 101 configured for acquisition and pre-processing of an image data set: acquiring CIFAR-10, GTSRB and tiny-ImageNet data sets, and preprocessing the CIFAR-10, GTSRB and tiny-ImageNet data sets;
a second processing module 102 configured to train the neural network: training different data sets by using different model structures, namely training the CIFAR-10 data set by using a VGG19 model, training the GTSRB data set by using a LeNet-5 model, and training the tiny-ImageNet data set by using a MobileNet V1 model;
a third processing module 103 configured to calculate a decision score: selecting a specific layer in each model for the VGG19 model, the LeNet-5 model and the MobileNet V1 model to calculate the decision score of the neuron;
a fourth processing module 104 configured to train the binary classifier: respectively building different binary classifiers aiming at the CIFAR-10, GTSRB and tiny-ImageNet data sets, and then training the binary classifiers corresponding to the CIFAR-10, GTSRB and tiny-ImageNet data sets by using decision scores calculated by specific layers in each model;
a fifth processing module 105 configured to optimize parameters of the binary classifier: inputting the decision scores of adversarial samples into the trained binary classifier for testing, and optimizing the binary classifier if the classification accuracy is insufficient;
and inputting the decision scores of benign samples into the binary classifier, and optimizing the binary classifier if the classification accuracy is insufficient.
According to the system of the second aspect of the present invention, the first processing module 101 is configured to pre-process the CIFAR-10, GTSRB and tiny-ImageNet data sets, including:
for the GTSRB dataset, randomly extracting 30% of the pictures from each class as the test set, with the remaining pictures as the training set; for the tiny-ImageNet dataset, using 11,000 pictures for training and 2,000 pictures for testing accuracy; and one-hot encoding the class labels.
According to the system of the second aspect of the present invention, the third processing module 103 is configured to select a specific layer in each model to calculate a decision score of a neuron, including:
the Flatten layer or global average pooling layer, located between the convolutional layers and the fully-connected layers, is selected as the layer for decision-score calculation.
According to the system of the second aspect of the present invention, the third processing module 103 is configured to select the Flatten layer or the global average pooling layer between the convolutional layers and the fully-connected layers as the decision-score calculation layer, specifically including:
selecting the Flatten layer for the VGG19 model and the LeNet-5 model;
and selecting the global average pooling layer for the MobileNetV1 model.
According to the system of the second aspect of the present invention, the third processing module 103 is configured to calculate the decision score as follows:
wherein,
μ(x) is the decision score;
x is an image sample;
y_c is the confidence of the model's predicted class label c for the input x.
According to the system of the second aspect of the present invention, the fourth processing module 104 is configured to build different binary classifiers for the CIFAR-10, GTSRB and tiny-ImageNet datasets respectively, including:
for the CIFAR-10 dataset, the corresponding binary classifier is a 512-unit fully-connected layer with relu activation, followed by a 2-neuron fully-connected layer with softmax activation;
for the GTSRB dataset, the corresponding binary classifier is a 1024-unit fully-connected layer with relu activation, a 512-unit fully-connected layer with relu activation, and finally a 2-neuron fully-connected layer with softmax activation;
for the tiny-ImageNet dataset, the corresponding binary classifier is a 1024-unit fully-connected layer with relu activation, two identical 512-unit fully-connected layers with relu activation, and finally a fully-connected layer with softmax activation.
According to the system of the second aspect of the present invention, the fourth processing module 104 is configured to generate the adversarial samples whose decision scores train the binary classifiers corresponding to the CIFAR-10, GTSRB and tiny-ImageNet datasets, including:
using adversarial samples generated by FGSM to obtain the decision scores of generic adversarial samples;
According to the system of the second aspect of the present invention, the fifth processing module 105 is configured to generate the adversarial samples whose decision scores are used for parameter optimization of the binary classifier, including:
generating adversarial samples using BIM and JSMA, and calculating their decision scores for parameter optimization.
A third aspect of the invention discloses an electronic device. The electronic device comprises a memory and a processor; the memory stores a computer program, and when the processor executes the computer program, the steps of the decision-score-based image adversarial sample detection method of any one of the first aspect of the disclosure are implemented.
Fig. 5 is a block diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 5, the electronic device includes a processor, a memory, a communication interface, a display screen, and an input device, which are connected by a system bus. Wherein the processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic equipment comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the electronic device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, Near Field Communication (NFC) or other technologies. The display screen of the electronic equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the electronic equipment, an external keyboard, a touch pad or a mouse and the like.
It will be understood by those skilled in the art that the structure shown in fig. 5 is only a partial block diagram related to the technical solution of the present disclosure, and does not constitute a limitation of the electronic device to which the solution of the present application is applied, and a specific electronic device may include more or less components than those shown in the drawings, or combine some components, or have a different arrangement of components.
A fourth aspect of the invention discloses a computer-readable storage medium. The computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the decision-score-based image adversarial sample detection method of any one of the first aspect of the disclosure.
It should be noted that the technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between such combinations, they should be considered within the scope of the present description. The above examples express only several embodiments of the present application, and their description is relatively specific and detailed, but should not be construed as limiting the scope of the invention. It should also be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. An image adversarial sample detection method based on decision scores, characterized by comprising the following steps:
step S1, acquisition and preprocessing of the image dataset: acquiring CIFAR-10, GTSRB and tiny-ImageNet data sets, and preprocessing the CIFAR-10, GTSRB and tiny-ImageNet data sets;
step S2, training the neural network: training different data sets by using different model structures, namely training the CIFAR-10 data set by using a VGG19 model, training the GTSRB data set by using a LeNet-5 model, and training the tiny-ImageNet data set by using a MobileNet V1 model;
step S3, calculating a decision score: selecting specific layers in each model to calculate decision scores of neurons for the VGG19 model, the LeNet-5 model and the MobileNetV1 model;
step S4, training a binary classifier: respectively building different binary classifiers aiming at the CIFAR-10, GTSRB and tiny-ImageNet data sets, and then training the binary classifiers corresponding to the CIFAR-10, GTSRB and tiny-ImageNet data sets by using decision scores calculated by specific layers in each model;
step S5, parameter optimization of the binary classifier: inputting the decision scores of adversarial samples into the trained binary classifier for testing, and optimizing the binary classifier if the classification accuracy is insufficient;
and inputting the decision scores of benign samples into the binary classifier, and optimizing the binary classifier if the classification accuracy is insufficient.
2. The image adversarial sample detection method based on decision scores according to claim 1, wherein in step S1, the method for preprocessing the CIFAR-10, GTSRB and tiny-ImageNet datasets comprises:
for the GTSRB dataset, randomly extracting 30% of the pictures from each class as the test set, with the remaining pictures as the training set; for the tiny-ImageNet dataset, using 11,000 pictures for training and 2,000 pictures for testing accuracy; and one-hot encoding the class labels.
3. The image adversarial sample detection method based on decision scores according to claim 1, wherein in step S3, the method for selecting a specific layer in each model to calculate the decision scores of neurons comprises:
selecting the Flatten layer or global average pooling layer, located between the convolutional layers and the fully-connected layers, as the layer for decision-score calculation.
4. The image adversarial sample detection method based on decision scores according to claim 3, wherein in step S3, the specific method for selecting the Flatten layer or the global average pooling layer between the convolutional layers and the fully-connected layers as the decision-score calculation layer comprises:
selecting the Flatten layer for the VGG19 model and the LeNet-5 model;
and selecting the global average pooling layer for the MobileNetV1 model.
5. The image adversarial sample detection method based on decision scores according to claim 3, wherein in step S3, the decision score is calculated as follows:
wherein,
μ(x) is the decision score;
x is an image sample;
y_c is the confidence of the model's predicted class label c for the input x.
6. The image adversarial sample detection method based on decision scores according to claim 1, wherein in step S4, the method for building different binary classifiers for the CIFAR-10, GTSRB and tiny-ImageNet datasets respectively comprises:
for the CIFAR-10 dataset, the corresponding binary classifier is a 512-unit fully-connected layer with relu activation, followed by a 2-neuron fully-connected layer with softmax activation;
for the GTSRB dataset, the corresponding binary classifier is a 1024-unit fully-connected layer with relu activation, a 512-unit fully-connected layer with relu activation, and finally a 2-neuron fully-connected layer with softmax activation;
for the tiny-ImageNet dataset, the corresponding binary classifier is a 1024-unit fully-connected layer with relu activation, two identical 512-unit fully-connected layers with relu activation, and finally a fully-connected layer with softmax activation.
7. The method according to claim 1, wherein in step S4, the adversarial samples whose decision scores train the binary classifiers corresponding to the CIFAR-10, GTSRB and tiny-ImageNet datasets are generated by:
using adversarial samples generated by FGSM to obtain the decision scores of generic adversarial samples;
and in step S5, the adversarial samples whose decision scores are used for parameter optimization of the binary classifier are generated by:
generating adversarial samples using BIM and JSMA, and calculating their decision scores for parameter optimization.
8. A decision-score-based image adversarial sample detection system, characterized in that the system comprises:
a first processing module configured for acquisition and pre-processing of an image dataset: acquiring CIFAR-10, GTSRB and tiny-ImageNet data sets, and preprocessing the CIFAR-10, GTSRB and tiny-ImageNet data sets;
a second processing module configured to train a neural network: training different data sets by using different model structures, namely training the CIFAR-10 data set by using a VGG19 model, training the GTSRB data set by using a LeNet-5 model, and training the tiny-ImageNet data set by using a MobileNet V1 model;
a third processing module configured to compute a decision score: selecting a specific layer in each model for the VGG19 model, the LeNet-5 model and the MobileNet V1 model to calculate the decision score of the neuron;
a fourth processing module configured to train a binary classifier: respectively building different binary classifiers aiming at the CIFAR-10, GTSRB and tiny-ImageNet data sets, and then training the binary classifiers corresponding to the CIFAR-10, GTSRB and tiny-ImageNet data sets by using decision scores calculated by specific layers in each model;
a fifth processing module configured to optimize parameters of the binary classifier: inputting the decision scores of adversarial samples into the trained binary classifier for testing, and optimizing the binary classifier if the classification accuracy is insufficient;
and inputting the decision scores of benign samples into the binary classifier, and optimizing the binary classifier if the classification accuracy is insufficient.
9. An electronic device, comprising a memory and a processor, the memory storing a computer program, characterized in that when the processor executes the computer program, the steps of the image adversarial sample detection method based on decision scores according to any one of claims 1 to 7 are implemented.
10. A computer-readable storage medium having stored thereon a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the image adversarial sample detection method based on decision scores according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210556274.3A CN114841983B (en) | 2022-05-17 | 2022-05-17 | Image countermeasure sample detection method and system based on decision score |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210556274.3A CN114841983B (en) | 2022-05-17 | 2022-05-17 | Image countermeasure sample detection method and system based on decision score |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114841983A true CN114841983A (en) | 2022-08-02 |
CN114841983B CN114841983B (en) | 2022-12-06 |
Family
ID=82572423
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210556274.3A Active CN114841983B (en) | 2022-05-17 | 2022-05-17 | Image countermeasure sample detection method and system based on decision score |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114841983B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110768971A (en) * | 2019-10-16 | 2020-02-07 | 伍军 | Confrontation sample rapid early warning method and system suitable for artificial intelligence system |
CN111767860A (en) * | 2020-06-30 | 2020-10-13 | 阳光学院 | Method and terminal for realizing image recognition through convolutional neural network |
CN111862067A (en) * | 2020-07-28 | 2020-10-30 | 中山佳维电子有限公司 | Welding defect detection method and device, electronic equipment and storage medium |
CN112285667A (en) * | 2020-12-21 | 2021-01-29 | 南京天朗防务科技有限公司 | Neural network-based anti-ground clutter processing method |
CN112673381A (en) * | 2020-11-17 | 2021-04-16 | 华为技术有限公司 | Method and related device for identifying confrontation sample |
CN112907431A (en) * | 2021-02-26 | 2021-06-04 | 中国科学技术大学 | Steganalysis method for resisting steganography robustness |
CN113283599A (en) * | 2021-06-11 | 2021-08-20 | 浙江工业大学 | Anti-attack defense method based on neuron activation rate |
CN113627543A (en) * | 2021-08-13 | 2021-11-09 | 南开大学 | Anti-attack detection method |
CN113642378A (en) * | 2021-05-14 | 2021-11-12 | 浙江工业大学 | Signal countermeasure sample detector design method and system based on N +1 type countermeasure training |
CN114049537A (en) * | 2021-11-19 | 2022-02-15 | 江苏科技大学 | Convergence neural network-based countermeasure sample defense method |
- 2022-05-17: CN202210556274.3A patent/CN114841983B/en, status Active
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110768971A (en) * | 2019-10-16 | 2020-02-07 | 伍军 | Confrontation sample rapid early warning method and system suitable for artificial intelligence system |
CN111767860A (en) * | 2020-06-30 | 2020-10-13 | 阳光学院 | Method and terminal for realizing image recognition through convolutional neural network |
CN111862067A (en) * | 2020-07-28 | 2020-10-30 | 中山佳维电子有限公司 | Welding defect detection method and device, electronic equipment and storage medium |
CN112673381A (en) * | 2020-11-17 | 2021-04-16 | 华为技术有限公司 | Method and related device for identifying confrontation sample |
CN112285667A (en) * | 2020-12-21 | 2021-01-29 | 南京天朗防务科技有限公司 | Neural network-based anti-ground clutter processing method |
CN112907431A (en) * | 2021-02-26 | 2021-06-04 | 中国科学技术大学 | Steganalysis method for resisting steganography robustness |
CN113642378A (en) * | 2021-05-14 | 2021-11-12 | 浙江工业大学 | Signal countermeasure sample detector design method and system based on N +1 type countermeasure training |
CN113283599A (en) * | 2021-06-11 | 2021-08-20 | 浙江工业大学 | Anti-attack defense method based on neuron activation rate |
CN113627543A (en) * | 2021-08-13 | 2021-11-09 | 南开大学 | Anti-attack detection method |
CN114049537A (en) * | 2021-11-19 | 2022-02-15 | 江苏科技大学 | Convergence neural network-based countermeasure sample defense method |
Non-Patent Citations (12)
Also Published As
Publication number | Publication date |
---|---|
CN114841983B (en) | 2022-12-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Li et al. | Adversarial examples for CNN-based SAR image classification: An experience study | |
CN109978893B (en) | Training method, device, equipment and storage medium of image semantic segmentation network | |
Xie et al. | Multilevel cloud detection in remote sensing images based on deep learning | |
CN110991444A (en) | Complex scene-oriented license plate recognition method and device | |
Lei et al. | Boundary extraction constrained siamese network for remote sensing image change detection | |
CN114255403A (en) | Optical remote sensing image data processing method and system based on deep learning | |
CN115205855B (en) | Vehicle target identification method, device and equipment integrating multi-scale semantic information | |
Fan | Research and realization of video target detection system based on deep learning | |
CN115410081A (en) | Multi-scale aggregated cloud and cloud shadow identification method, system, equipment and storage medium | |
Cheng et al. | YOLOv3 Object Detection Algorithm with Feature Pyramid Attention for Remote Sensing Images. | |
Ajaz et al. | Small object detection using deep learning | |
Liu et al. | A multi-scale feature pyramid SAR ship detection network with robust background interference | |
Liang et al. | Adaptive multiple kernel fusion model using spatial-statistical information for high resolution SAR image classification | |
Shen et al. | Infrared object detection method based on DBD-YOLOv8 | |
Zhang et al. | Learning nonlocal quadrature contrast for detection and recognition of infrared rotary-wing UAV targets in complex background | |
Chua et al. | Visual IoT: ultra-low-power processing architectures and implications | |
Wang et al. | Prior-information auxiliary module: an injector to a deep learning bridge detection model | |
Wu et al. | Research on asphalt pavement disease detection based on improved YOLOv5s | |
Wu et al. | WDFA-YOLOX: A Wavelet-Driven and Feature-Enhanced Attention YOLOX Network for Ship Detection in SAR Images | |
Mukherjee et al. | Segmentation of natural images based on super pixel and graph merging | |
CN114841983B (en) | Image countermeasure sample detection method and system based on decision score | |
CN116188956A (en) | Method and related equipment for detecting deep fake face image | |
Wang et al. | FPA-DNN: a forward propagation acceleration based deep neural network for ship detection | |
Chen et al. | HFPNet: Super Feature Aggregation Pyramid Network for Maritime Remote Sensing Small-Object Detection | |
Yang et al. | UAV Landmark Detection Based on Convolutional Neural Network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |