CN117152486A - Image countermeasure sample detection method based on interpretability - Google Patents
Image countermeasure sample detection method based on interpretability
- Publication number
- CN117152486A CN117152486A CN202310921519.2A CN202310921519A CN117152486A CN 117152486 A CN117152486 A CN 117152486A CN 202310921519 A CN202310921519 A CN 202310921519A CN 117152486 A CN117152486 A CN 117152486A
- Authority
- CN
- China
- Prior art keywords
- image
- noise reduction
- feature map
- disturbance
- robustness
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0475—Generative networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/094—Adversarial learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
An interpretability-based adversarial image sample detection method belonging to the field of computer vision. To address the problem that existing detection methods remain abstract owing to insufficient interpretability study of adversarial image samples, the invention provides both an interpretation method for image feature robustness and an adversarial image sample detection method based on adaptive noise reduction. The method first distinguishes which regions of an image carry robust features and which carry non-robust features, then applies adaptive noise reduction according to the characteristics of each region, and finally trains a classifier on the difference between the classification results of the image before and after noise reduction, thereby detecting adversarial image samples.
Description
Technical Field
The invention belongs to the field of computer vision and specifically relates to an interpretability-based method for detecting adversarial image samples.
Background
With the development of deep learning, deep neural networks (DNNs) are widely used in place of humans for complex, repetitive work, bringing great convenience. DNN-based image recognition is applied in many fields, such as face recognition, autonomous driving, and biomedicine; as people grow accustomed to this convenience, the attendant security risks deserve increasing attention. Once an image recognition system is attacked, it poses a serious threat to property and personal safety.
Adversarial samples can attack many kinds of DNN classification networks, for example in speech recognition, natural language processing, and image recognition; adversarial attack, defense, and detection are most complex and diverse in the image recognition field. An adversarial image sample only needs a carefully crafted perturbation added to the original image to make the recognition system misclassify the picture with very high confidence. The perturbation may affect only a few to a few dozen pixels and is barely visible to the naked eye, which lets an attacker strike an image recognition system without being noticed.
Traffic-sign recognition in autonomous driving, face recognition, and medical imaging systems all place high demands on the accuracy of image processing results; once such a system is attacked into producing a wrong prediction, it seriously threatens property and even life. For example, if an autonomous driving system under adversarial attack identifies a stop-and-yield sign as a right-turn sign with high confidence, the consequences could be irreparable. Ensuring the safety and reliability of image recognition systems is therefore essential.
Mainstream adversarial image sample detection methods fall into two main categories: statistics-based detection and detection via an auxiliary model. Statistics-based methods build a classifier directly from differences in the statistical properties of adversarial and clean samples; auxiliary-model methods use an auxiliary model to extract the differing characteristics of adversarial and clean samples and build a classifier from them.
The transferability of adversarial samples and the difficulty of predicting their characteristics make defense and detection hard. By attack environment, adversarial attacks divide into white-box and black-box attacks: in a white-box attack, the attacker knows everything about the DNN model, including its architecture, weights, inputs, and outputs; in a black-box attack, the attacker knows nothing about the model but can exploit transferability, generating adversarial samples on an existing model and using them to attack other, unknown models. Moreover, owing to the lack of interpretability studies, researchers cannot explain why adversarial samples succeed against DNN models, which makes them still harder to defend against or detect.
Disclosure of Invention
To address the problem that existing adversarial image sample detection methods remain abstract owing to insufficient interpretability study of adversarial image samples, the invention provides both an interpretation method for image feature robustness and an adversarial image sample detection method based on adaptive noise reduction. The method first distinguishes which regions of the image carry robust features and which carry non-robust features, then applies adaptive noise reduction according to the characteristics of each region, and finally trains a classifier on the difference between the classification results of the image before and after noise reduction, thereby detecting adversarial image samples.
1. An interpretability-based adversarial image sample detection method, characterized by comprising the following steps:
Step 1: acquire the ILSVRC2012 data set and generate corresponding adversarial samples.
Step 2: add perturbations to an intermediate-layer feature map of the image classification network and partition the image features by robustness.
A robustness score r serves as the evaluation index of image features; x is an original input image sample. To obtain depth features of the image, first extract a feature map of an intermediate classification layer, divide the feature map A of the input image x into n×n grid regions, and add a random perturbation δ_k to each grid region k ∈ K = {1, 2, 3, ..., n×n} one by one, obtaining the feature map A_k in which the k-th grid region is perturbed. A_k is then fed back into the model to continue classification, and the results before and after the perturbation are compared. The classification result is a probability in [0, 1]; it is expanded to (−∞, +∞) using the logit function.
Let i ∈ R index the space of all possible classes of x, and let x_k denote the image whose k-th feature-map grid has been perturbed. Given the probability that the original image x is classified as i, predict for each k ∈ K = {1, 2, 3, ..., n×n} the most probable class of x_k and the most probable class of the original image, and from these compute Z(x) and Z(x_k).
The larger the gap between Z(x) and Z(x_k), the more sensitive the region is to the perturbation δ_k, i.e. the lower its robustness score; the robustness score r_k of the k-th grid region is accordingly defined as a decreasing function of this gap.
A perturb layer is added after a convolutional layer of the CNN; the added random perturbation is Gaussian noise with mean μ = 0 and standard deviation σ = 0.1.
Perturbations are added to a feature map during the ResNet50 or VGG16 classification process. For a ResNet50 network, this is done by adding a perturb layer after its "conv1_conv" convolutional layer; that layer has kernel_size = 7×7, stride = 2, and padding = 3, and yields a 64×112×112 feature map after convolution. For a VGG16 network, a perturb layer is added after its "block2_conv1" convolutional layer; that layer has kernel_size = 3×3, stride = 1, and padding = 1, and yields a 128×112×112 feature map after convolution.
Step 3: apply adaptive noise reduction to the robustness-partitioned image and train a classifier.
JPEG compression with varying quality factors serves as the adaptive noise-reduction method. The quality factor Q of JPEG compression ranges over [1, 100]; the larger Q, the lower the noise-reduction level, and conversely, the smaller Q, the higher the noise-reduction level.
Using the robustness score r_k ∈ R_K = {r_1, r_2, r_3, ..., r_{n×n}} computed for each grid region in the previous step, the image is adaptively denoised, and the quality factor Q_k is obtained from r_k.
The original sample x and the denoised sample x′ are each fed into a baseline classifier to obtain the softmax distribution S(x) of x and the softmax distribution S(x′) of x′. The KL divergence D of S(x) and S(x′) measures the difference before and after denoising and is computed as D(S(x) ‖ S(x′)) = Σ_i S(x)_i log(S(x)_i / S(x′)_i).
The trained detector is given a threshold τ = 256: when D < τ, the input sample is classified as clean; when D ≥ τ, it is classified as adversarial.
Drawings
FIG. 1 is a framework diagram of image-feature partitioning;
FIG. 2 is a framework diagram of adaptive noise reduction and the classifier;
FIG. 3 is a schematic diagram of image-feature robustness partitioning;
FIG. 4 is a schematic diagram of adaptive noise reduction and classification;
Detailed Description
Image recognition based on convolutional neural networks is widely applied in fields such as face recognition, autonomous driving, and biomedicine; as people grow accustomed to this convenience, the attendant security risks deserve increasing attention. Once an image recognition system is attacked, it poses a serious threat to property and personal safety. An interpretability-based adversarial image sample detection technique can therefore greatly improve the security of image recognition and has broad application prospects.
Traditional interpretability algorithms in computer vision, such as CAM and Grad-CAM, must modify the original CNN when visualizing image classification, which biases the classification result to some degree. The present method requires no modification of the original model: it partitions the robust regions of an image classification purely from the network's results after extra perturbations are added, avoiding any loss of classification accuracy. Moreover, denoising-based adversarial sample detection may suffer low accuracy or a high false-alarm rate depending on the denoising strength; the invention uses adaptive noise reduction to achieve high detection accuracy with a low false-alarm rate.
The invention provides an interpretability-based adversarial image sample detection method comprising: Step 1, acquire the ILSVRC2012 data set and generate corresponding adversarial samples; Step 2, extract an intermediate-layer feature map of the image classification network, add perturbations, and compare the classification results before and after the perturbation to partition the image features by robustness; Step 3, adaptively denoise the input image based on that robustness partition, classify the image with a CNN before and after denoising, and use the difference in classification results as the criterion for training a detection classifier. The framework of Step 2 is shown in FIG. 1 and that of Step 3 in FIG. 2.
Step 1: acquire the ILSVRC2012 data set and generate corresponding adversarial samples
Since no public adversarial-sample data set exists, current studies of adversarial samples must generate their own; the invention attacks images from the ILSVRC2012 data set with existing attack methods. Adversarial image generation is mature, so attack methods such as FGSM, CW, PGD, and DeepFool are used to produce adversarial samples as training and test data.
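As a concrete illustration of the simplest of these attacks, FGSM perturbs each input pixel by a small step ε in the direction of the sign of the loss gradient. A minimal numpy sketch of that update (the gradient is supplied by the caller here; in practice it comes from backpropagation through the attacked network, and ε = 0.03 is an illustrative default, not a value fixed by the patent):

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.03):
    """One Fast Gradient Sign Method step: move each pixel by eps in the
    direction that increases the loss, then clip back to the valid [0, 1]
    pixel range."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)
```

The resulting perturbation has L∞ norm at most ε, which is why it stays imperceptible for small ε.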
Step 2: add perturbations to an intermediate-layer feature map and partition the image features by robustness
Robustness score
Robust image features are not easily disturbed by added perturbations, whereas non-robust features are relatively sensitive to them. The method therefore studies the depth features used in image classification, grids the feature map, and partitions robust from non-robust features by adding random noise and comparing the classification results.
The invention proposes a robustness score r as the evaluation index of image features; x is an original input image sample. To obtain depth features of the image, first extract a feature map of an intermediate classification layer, divide the feature map A of the input image x into n×n grid regions, and add a random perturbation δ_k to each grid region k ∈ K = {1, 2, 3, ..., n×n} one by one, obtaining the feature map A_k in which the k-th grid region is perturbed. A_k is then fed back into the model to continue classification, and the results before and after the perturbation are compared. The classification result is a probability in [0, 1]; it is expanded to (−∞, +∞) using the logit function.
Let i ∈ R index the space of all possible classes of x, and let x_k denote the image whose k-th feature-map grid has been perturbed. Given the probability that the original image x is classified as i, predict for each k ∈ K = {1, 2, 3, ..., n×n} the most probable class of x_k and the most probable class of the original image, and from these compute Z(x) and Z(x_k). The larger the gap between Z(x) and Z(x_k), the more sensitive the region is to the perturbation δ_k, i.e. the lower its robustness score; the robustness score r_k of the k-th grid region is accordingly defined as a decreasing function of this gap.
the region of the feature map to which the disturbance is added can be mapped back to the original image, and features in the original image are located. A schematic diagram of the image feature robustness partitioning is shown in fig. 3.
Random perturbation and its location
In the invention, a perturb layer is added after a convolutional layer of the CNN so that perturbations are injected into the feature map during classification. The random perturbation is Gaussian noise with mean μ = 0 and standard deviation σ = 0.1.
Advanced CNNs such as ResNet50 and VGG16 perform excellently in image classification, so the invention adds perturbations to feature maps during ResNet50 and VGG16 classification. For the ResNet50 network, this is done by adding a perturb layer after its "conv1_conv" convolutional layer; that layer has kernel_size = 7×7, stride = 2, and padding = 3, and yields a 64×112×112 feature map after convolution. For the VGG16 network, a perturb layer is added after its "block2_conv1" convolutional layer; that layer has kernel_size = 3×3, stride = 1, and padding = 1, and yields a 128×112×112 feature map after convolution.
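The grid-wise perturb layer described above amounts to adding Gaussian noise to one of n×n regions of a C×H×W feature map. A numpy sketch (the grid count n and the seed are illustrative choices; μ = 0 and σ = 0.1 follow the text):

```python
import numpy as np

def perturb_grid(feature_map, k, n=4, mu=0.0, sigma=0.1, seed=0):
    """Add Gaussian noise (mean mu, std sigma) to the k-th of n*n grid
    regions of a C x H x W feature map, leaving all other regions
    untouched. Returns a new array; the input is not modified."""
    rng = np.random.default_rng(seed)
    c, h, w = feature_map.shape
    gh, gw = h // n, w // n          # grid cell height and width
    row, col = divmod(k, n)          # k indexes cells row-major
    out = feature_map.copy()
    region = out[:, row * gh:(row + 1) * gh, col * gw:(col + 1) * gw]
    region += rng.normal(mu, sigma, size=region.shape)  # in-place on the view
    return out
```

Running this for every k ∈ {0, ..., n·n − 1} and re-classifying each perturbed map yields the per-region comparisons of Step 2.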
Step 3, performing self-adaptive noise reduction on the images with the well-divided robustness and training a classifier
Adaptive noise-reduction method
Although a high noise-reduction level lets more adversarial samples be detected, it also produces more false positives because image quality degrades. An adaptive noise-reduction method is therefore needed to balance the detector's detection success rate against its false-positive rate.
JPEG compression works well as a noise-reduction method for removing perturbations, so JPEG compression with varying quality factors is adopted as the adaptive noise-reduction method. The quality factor Q of JPEG compression ranges over [1, 100]; the larger Q, the lower the noise-reduction level, and conversely, the smaller Q, the higher the noise-reduction level.
Using the robustness score r_k ∈ R_K = {r_1, r_2, r_3, ..., r_{n×n}} computed for each grid region in the previous step, the image is adaptively denoised. Because adversarial attackers tend to add perturbations to the non-robust features of an image, regions with a higher r receive low-level noise reduction, i.e. a larger Q value, while regions with a lower r receive high-level noise reduction, i.e. a smaller Q value, yielding the denoised image x′. The quality factor Q_k is obtained from r_k.
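The mapping from robustness score to quality factor can be sketched as below. The patent's exact formula for Q_k is not reproduced in this text, so the linear map is an assumption that merely respects the stated monotonicity (high r_k → large Q_k → mild denoising; low r_k → small Q_k → strong denoising):

```python
def quality_factor(r_k):
    """Map a robustness score r_k in [0, 1] to a JPEG quality factor
    Q_k in [1, 100]. The linear form 1 + 99 * r_k is an assumed stand-in
    for the patent's formula: it sends non-robust regions (r_k near 0)
    to strong compression and robust regions (r_k near 1) to mild
    compression."""
    r_k = min(max(r_k, 0.0), 1.0)  # clamp defensively
    return int(round(1 + 99 * r_k))
```

Each region would then be re-encoded with its own Q_k (e.g. via an image library's JPEG quality parameter) and the results recombined into x′.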
training detection classifier
Because a well-trained DNN classifier has a certain robustness to the denoising of normal images, the difference in classification results before and after denoising is small for clean samples and large for adversarial samples. The original sample x and the denoised sample x′ are each fed into a baseline classifier to obtain the softmax distribution S(x) of x and the softmax distribution S(x′) of x′. The KL divergence D of S(x) and S(x′) measures the difference before and after denoising and is computed as D(S(x) ‖ S(x′)) = Σ_i S(x)_i log(S(x)_i / S(x′)_i).
The trained detector is given a threshold τ = 256: when D < τ, the input sample is classified as clean; when D ≥ τ, it is classified as adversarial. A schematic of adaptive noise reduction and training of the detection classifier is shown in FIG. 4.
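The detection criterion reduces to a KL-divergence computation plus a threshold test, sketched here (the softmax distributions would come from the baseline classifier; τ = 256 follows the text):

```python
import numpy as np

def kl_divergence(s, s_prime, eps=1e-12):
    """D(S(x) || S(x')) = sum_i S(x)_i * log(S(x)_i / S(x')_i).
    A small eps guards against log(0) for zero-probability classes."""
    s = np.asarray(s, dtype=float) + eps
    sp = np.asarray(s_prime, dtype=float) + eps
    return float(np.sum(s * np.log(s / sp)))

def detect(s, s_prime, tau=256.0):
    """Flag the input as adversarial when the pre/post-denoising KL
    divergence meets or exceeds the threshold tau."""
    return kl_divergence(s, s_prime) >= tau
```

KL divergence is asymmetric, so the order — clean-sample distribution first, denoised second — matters and follows the formula above.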
Claims (1)
1. An interpretability-based adversarial image sample detection method, characterized by comprising the following steps:
Step 1: acquire the ILSVRC2012 data set and generate corresponding adversarial samples.
Step 2: add perturbations to an intermediate-layer feature map of the image classification network and partition the image features by robustness.
A robustness score r serves as the evaluation index of image features; x is an original input image sample. To obtain depth features of the image, first extract a feature map of an intermediate classification layer, divide the feature map A of the input image x into n×n grid regions, and add a random perturbation δ_k to each grid region k ∈ K = {1, 2, 3, ..., n×n} one by one, obtaining the feature map A_k in which the k-th grid region is perturbed. A_k is then fed back into the model to continue classification, and the results before and after the perturbation are compared. The classification result is a probability in [0, 1]; it is expanded to (−∞, +∞) using the logit function.
Let i ∈ R index the space of all possible classes of x, and let x_k denote the image whose k-th feature-map grid has been perturbed. Given the probability that the original image x is classified as i, predict for each k ∈ K = {1, 2, 3, ..., n×n} the most probable class of x_k and the most probable class of the original image, and from these compute Z(x) and Z(x_k).
The larger the gap between Z(x) and Z(x_k), the more sensitive the region is to the perturbation δ_k, i.e. the lower its robustness score; the robustness score r_k of the k-th grid region is accordingly defined as a decreasing function of this gap.
A perturb layer is added after a convolutional layer of the CNN; the added random perturbation is Gaussian noise with mean μ = 0 and standard deviation σ = 0.1.
Perturbations are added to a feature map during the ResNet50 or VGG16 classification process. For a ResNet50 network, this is done by adding a perturb layer after its "conv1_conv" convolutional layer; that layer has kernel_size = 7×7, stride = 2, and padding = 3, and yields a 64×112×112 feature map after convolution. For a VGG16 network, a perturb layer is added after its "block2_conv1" convolutional layer; that layer has kernel_size = 3×3, stride = 1, and padding = 1, and yields a 128×112×112 feature map after convolution.
Step 3: apply adaptive noise reduction to the robustness-partitioned image and train a classifier.
JPEG compression with varying quality factors serves as the adaptive noise-reduction method. The quality factor Q of JPEG compression ranges over [1, 100]; the larger Q, the lower the noise-reduction level, and conversely, the smaller Q, the higher the noise-reduction level.
Using the robustness score r_k ∈ R_K = {r_1, r_2, r_3, ..., r_{n×n}} computed for each grid region in the previous step, the image is adaptively denoised, and the quality factor Q_k is obtained from r_k.
The original sample x and the denoised sample x′ are each fed into a baseline classifier to obtain the softmax distribution S(x) of x and the softmax distribution S(x′) of x′. The KL divergence D of S(x) and S(x′) measures the difference before and after denoising and is computed as D(S(x) ‖ S(x′)) = Σ_i S(x)_i log(S(x)_i / S(x′)_i).
The trained detector is given a threshold τ = 256: when D < τ, the input sample is classified as clean; when D ≥ τ, it is classified as adversarial.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310921519.2A CN117152486A (en) | 2023-07-26 | 2023-07-26 | Image countermeasure sample detection method based on interpretability |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310921519.2A CN117152486A (en) | 2023-07-26 | 2023-07-26 | Image countermeasure sample detection method based on interpretability |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117152486A true CN117152486A (en) | 2023-12-01 |
Family
ID=88905028
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310921519.2A Pending CN117152486A (en) | 2023-07-26 | 2023-07-26 | Image countermeasure sample detection method based on interpretability |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117152486A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117934450A (en) * | 2024-03-13 | 2024-04-26 | 中国人民解放军国防科技大学 | Interpretive method and system for multi-source image data deep learning model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB03 | Change of inventor or designer information | ||
Inventor after: Wang Xiujuan
Inventor after: Li Qipeng
Inventor after: Zheng Kangfeng
Inventor after: Wang Zhengxiang
Inventor before: Wang Xiujuan
Inventor before: Zheng Kangfeng
Inventor before: Li Qipeng
Inventor before: Wang Zhengxiang