CN117876750A - Deep learning object detection adversarial example generation method based on neuron coverage - Google Patents

Deep learning object detection adversarial example generation method based on neuron coverage

Info

Publication number
CN117876750A
CN117876750A CN202311730923.8A
Authority
CN
China
Prior art keywords
coverage
neuron
object detection
model
neuron coverage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311730923.8A
Other languages
Chinese (zh)
Inventor
杨志斌
刘渠
王远
周勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202311730923.8A priority Critical patent/CN117876750A/en
Publication of CN117876750A publication Critical patent/CN117876750A/en
Pending legal-status Critical Current


Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a deep learning object detection adversarial example generation method based on neuron coverage, which comprises the following steps: registering a hook function in the object detection model, and pre-training the object detection model to obtain a trained object detection model; constructing a GAN model according to the UEA method; importing the trained object detection model into the GAN model and fixing its parameters; alternately training the discriminator and the generator based on the training set samples and the loss function until the set number of training rounds is reached, obtaining the generator parameters of the GAN model under the current neuron coverage criterion, where the loss function is optimized according to the current neuron coverage criterion; and adjusting the neuron coverage criterion and its related hyper-parameters, training the GAN model multiple times to obtain the generator parameters of the GAN model optimized under different neuron coverage criteria, and generating adversarial examples with the optimal generator parameters.

Description

Deep learning object detection adversarial example generation method based on neuron coverage
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a deep learning object detection adversarial example generation method based on neuron coverage.
Background
Object detection is the task of detecting objects with specific semantic information in digital images and videos; the detectable object classes depend on the given dataset. Object detection is one of the core problems of computer vision and has high research and application value in autonomous driving, video surveillance, scene understanding, and other areas. Deep-learning-based object detection techniques can be divided into two major categories: candidate-region-based and regression-based. Compared with traditional object detection techniques based on hand-crafted feature extraction, the detection capability of deep-learning-based object detection is markedly better.
With the rapid rise of deep learning, the security problems of deep learning models have gradually come into researchers' view. Adversarial examples are one of the most serious security threats these models face, and they are a key research direction of trustworthy AI (Artificial Intelligence). Since the concept of adversarial examples was proposed in 2013, the robustness and safety of deep-learning-based object detection models have been questioned. At the same time, because neural networks are end-to-end and hard to interpret, and their input-output space (i.e., all possible combinations of inputs and outputs) is too large for exhaustive exploration, generating adversarial examples for deep-learning object detection under limited computing resources is a great challenge.
Disclosure of Invention
The invention aims to: in order to improve the robustness and safety of deep-learning-based object detection models, the invention provides a neuron-coverage-based deep learning object detection adversarial example generation method.
The technical scheme is as follows: a deep learning object detection adversarial example generation method based on neuron coverage, comprising the following steps:
step 1: registering a hook function in the object detection model, wherein the hook function is used for acquiring the feature map of each layer in the feature extraction network of the object detection model; pre-training the object detection model to obtain a trained object detection model;
step 2: constructing a GAN model according to the UEA method, so that the settings of the discriminator, the generator and the optimization function in the GAN model are the same as those in the UEA method;
step 3: importing the trained object detection model into the GAN model constructed in step 2, and fixing the parameters of the trained object detection model;
step 4: alternately training the discriminator and the generator based on the training set samples and the loss function until the set number of training rounds is reached, to obtain the generator parameters of the GAN model under the current neuron coverage criterion, wherein the loss function is optimized according to the current neuron coverage criterion;
step 5: adjusting the neuron coverage criterion and its related hyper-parameters, and training the GAN model multiple times to obtain the generator parameters of the GAN model optimized under different neuron coverage criteria;
step 6: generating adversarial examples using the optimal generator parameters.
Further, the object detection model includes:
a feature extraction network, used to extract features from the input picture and output feature maps;
a region proposal network, used to extract target candidate regions based on the feature maps output by the feature extraction network; and
a region-of-interest (ROI) pooling layer, used to generate fixed ROI features for classification and localization.
Further, registering the hook function in the object detection model specifically comprises: registering the hook function in the feature extraction network of the object detection model.
Further, the loss function is obtained by optimizing according to the current neuron coverage criterion, and is specifically expressed as:
L = L_cGAN + α·L_L2 + β·L_DAG + ε·L_Fea + γ·L_cov (1)
wherein L represents the loss function, L_cGAN represents the GAN loss function, L_L2 represents the L2 loss function, L_DAG represents the high-level classification loss function, L_Fea represents the low-level feature loss function, L_cov represents the coverage loss, and α, β, ε and γ are weight coefficients;
the coverage loss is obtained by calculating the neuron coverage rate during forward propagation through the feature extraction network of the object detection model.
Further, the coverage loss is obtained by calculating the neuron coverage rate during forward propagation through the feature extraction network of the object detection model, specifically as follows:
the neuron coverage criterion, together with the neuron boundary coverage criterion and the Top-k neuron coverage criterion from extended neuron coverage, is used;
neuron coverage criterion definition: if sign(n_k,i, x) = +1, node n_k,i is neuron-covered by test case x, denoted N(n_k,i, x);
neuron boundary coverage criterion definition: if the activation of n_k,i on x falls outside the activation bounds observed during training, i.e. φ(x, n_k,i) ∈ (-∞, low_n) ∪ (high_n, +∞), node n_k,i is neuron-boundary-covered by test case x, denoted NB(n_k,i, x);
Top-k neuron coverage criterion definition: if rank(n_k,i, x) ≤ k, i.e. the activation of n_k,i ranks among the k largest in its layer, node n_k,i is Top-k-neuron-covered by test case x, denoted TN_k(n_k,i, x);
given f ∈ {N, NB, TN_k} and the set of neurons in the hidden layers H(N), the neuron coverage rate of the feature extraction network of the object detection model during forward propagation is expressed as:
M_f(H(N), x) = |{n ∈ H(N) | f(n, x)}| / |H(N)| (6)
wherein N, NB and TN_k respectively denote the neuron coverage criterion, the neuron boundary coverage criterion and the Top-k neuron coverage criterion, and f(n, x) is a function that determines whether neuron n is covered by test case x;
the coverage loss is expressed as:
L_cov(G) = 1 - M_f(N, G(I)) (7)
wherein G(I) represents the perturbed image.
Further, the GAN loss function L_cGAN is expressed as:
L_cGAN(G, D) = E_I[log D(I)] + E_I[log(1 - D(G(I)))] (2)
wherein G represents the generator, D represents the discriminator, I represents the input image or frame, G(I) represents the perturbed image, and E_I[·] denotes the mathematical expectation over the input I.
Further, the L2 loss function L_L2 is expressed as:
L_L2(G) = E_I[||I - G(I)||_2] (3)
wherein I represents the input image or frame, G(I) represents the perturbed image, and E_I[·] denotes the mathematical expectation over the input I.
Further, the high-level classification loss function L_DAG is expressed as:
L_DAG = Σ_{n=1}^{N} [f_{l_n}(X, t_n) - f_{l'_n}(X, t_n)] (4)
wherein I represents the input image or frame, X is the feature map extracted at I by the feature extraction network of the object detection model, t_n is the n-th target candidate region in the region proposal network of the object detection model, l_n is the true label of t_n, l'_n is a wrong label randomly sampled from the other (incorrect) classes, and f(X, t_n) denotes the classification score vector on the n-th target candidate region.
Further, the low-level feature loss function L_Fea is expressed as:
L_Fea = Σ_m E_I[||A_m ∘ (X_m - R_m)||_2] (5)
wherein X_m represents the feature map extracted by the m-th layer of the feature extraction network of the object detection model, R_m represents a randomly predefined feature map, A_m represents the attention weight calculated from the target candidate regions of the region proposal network, and ∘ represents the Hadamard product.
The beneficial effects are as follows: compared with the prior art, the invention has the following advantages:
(1) The invention generates adversarial examples with a generative adversarial network, so that an adversarial example can be produced by a single forward pass of the generator, which improves generation efficiency;
(2) The invention uses neuron coverage to guide adversarial example generation, which can improve the attack success rate of the adversarial examples against the object detection model;
(3) The invention covers neurons in specific subspaces defined at different abstraction levels of the neural network model, exploring as much diversity as possible. The neuron coverage criteria can help developers quantify the robustness of a neural network and analyze its internal structure; under the guidance of appropriate coverage criteria, developers can retrain and improve the network using the generated adversarial examples, which enable developers to understand and compare the security-related behavior of different networks and to partition the input space of the deep-learning-based object detection model effectively;
(4) The invention studies the generation process of object detection adversarial examples in depth under the guidance of neuron coverage, which makes it possible to analyze and discover potential threats and security vulnerabilities of the model, and to improve the stability of the model under various extreme inputs.
Drawings
FIG. 1 is a flowchart of the adversarial example generation method;
FIG. 2 is a block diagram of a Faster-RCNN_VGG16 network;
FIG. 3 is a diagram of the GAN model architecture.
Detailed Description
This embodiment provides a deep learning object detection adversarial example generation method based on neuron coverage, which broadly comprises the following steps. A hook function is registered in the target model to acquire the feature map of each layer in the feature extraction network; the target model is trained with the training set and its parameters are saved. A GAN model is constructed according to the UEA method, the trained target model is imported, and the loss function is optimized according to a neuron coverage criterion, so that during training the model computes the coverage loss of the current batch of inputs under the given coverage criterion. The discriminator network and the generator network are trained alternately with the training set and the optimized loss function until the set number of training rounds is reached, yielding the generator model, which is then used to generate adversarial examples; because the GAN produces adversarial examples directly, the traditional optimization mechanism for adversarial examples in the object detection field is converted into a generation mechanism, improving generation efficiency. Adversarial examples are then generated on the test set with the generator, and their performance is evaluated on various object detection models. Finally, the GAN model is trained multiple times, the hyper-parameters related to the coverage criteria are adjusted, the generator parameters of the GAN model optimized under different coverage criteria are saved, and the optimal generator is used to complete the object detection adversarial example generation task.
The method proposed in this embodiment is further described with reference to the accompanying drawings; as shown in FIG. 1, it specifically includes the following steps:
step 1: register a hook function in the feature extraction network of the target model, train the target model with the training set, and save the model parameters; the specific operation comprises the following steps:
This embodiment uses Faster-RCNN_VGG16 (Faster R-CNN based on VGG-16) as the target model, which uses the first 13 convolutional layers of VGG-16 and part of the structure of its classifier. The feature extraction network of the target model outputs feature maps for the input pictures; these are used by the subsequent region proposal network (Region Proposal Network, RPN) to extract target candidate regions, and by region-of-interest pooling (RoI pooling) to generate fixed ROI features for classification and localization. The network architecture of Faster-RCNN_VGG16 is shown in FIG. 2.
VGG-16 consists of 16 weight layers: 13 convolutional layers and 3 fully connected layers. Two convolutions with 64 kernels are followed by a pooling layer; two convolutions with 128 kernels are followed by a second pooling layer; three convolutions with 256 kernels are followed by a third pooling layer; two groups of three convolutions with 512 kernels are each followed by a pooling layer; finally come the three fully connected layers.
The target model is a two-stage object detection model and assists in training the GAN model. This assistance consists of using the hook function during GAN training to acquire the feature map of each layer in the target model's feature extraction network, calculating the neuron coverage rate from the neuron states, and calculating the high-level classification loss and the low-level feature loss. Concretely: a hook function is registered in the feature extraction network of the Faster-RCNN_VGG16 model (mainly the first 13 convolutional layers in FIG. 2), and the hook function is used to acquire the feature maps of each layer in that network, from which the neuron coverage rate and the high-level classification and low-level feature losses are computed.
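The following Python sketch illustrates this hook registration, assuming a PyTorch model whose VGG-16 feature extraction network is exposed as an nn.Sequential of layers; the function and variable names are illustrative, not the patent's exact implementation:

```python
import torch.nn as nn

feature_maps = {}  # layer index -> feature map captured on the forward pass

def make_hook(idx):
    def hook(module, inputs, output):
        # kept attached to the graph so coverage-based losses can backpropagate
        feature_maps[idx] = output
    return hook

def register_backbone_hooks(backbone: nn.Sequential):
    """Register a forward hook on every convolutional layer of the backbone."""
    handles = []
    for idx, layer in enumerate(backbone):
        if isinstance(layer, nn.Conv2d):
            handles.append(layer.register_forward_hook(make_hook(idx)))
    return handles  # keep the handles so the hooks can be removed later
```

The returned handles allow the hooks to be removed once coverage statistics are no longer needed.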
The Faster-RCNN_VGG16 model is trained with the 5,011 pictures in the training and validation sets of VOC2007 from the PASCAL VOC dataset. Before training, the parameters of the network structures inherited from VGG-16 are initialized with the pre-trained VGG-16 parameters provided by torchvision. The model hyper-parameter settings are shown in Table 1.
TABLE 1 target model initialization settings
Step 2: constructing a GAN model according to a UEA method, importing a trained target model, optimizing a loss function according to a neuron coverage criterion, and training the GAN model by using a training set; the specific operation comprises the following steps:
the GAN model is constructed according to the UEA method such that the settings of the discriminators, generators and optimization functions in the GAN model are the same as in the UEA method. The GAN model architecture is shown in fig. 3. The generator is responsible for generating the anti-disturbance using the original samples as input, whose structure includes a convolutional layer, a LeakyReLU layer, and a Tanh layer. The arbiter is responsible for distinguishing the generated samples from the real samples, and its structure includes a convolution layer, a LeakyReLU layer, a BatchNorm layer, and a Sigmoid layer.
The network structures of the discriminator and the generator are shown in Table 2.
Table 2 Discriminator and generator network structures
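Since Table 2's exact values are not reproduced here, the following sketch fills in illustrative channel widths and kernel sizes around the layer types named above; it is a minimal stand-in, not the patent's exact architecture:

```python
import torch.nn as nn

class Generator(nn.Module):
    """Convolutions with LeakyReLU and a Tanh output, as described above."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 64, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),  # bounded output used as G(I)
        )

    def forward(self, image):
        return self.net(image)

class Discriminator(nn.Module):
    """Convolutions with LeakyReLU, BatchNorm and a Sigmoid output."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1), nn.Sigmoid(),  # real/fake score map
        )

    def forward(self, image):
        return self.net(image)
```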
The target model trained in step 1 is imported into the GAN model, and the parameters of the target model are fixed.
When training the GAN model in the UEA method, the target model is a two-stage object detection model such as Faster R-CNN, and the loss function consists of four parts: the GAN loss, the L2 loss, the DAG loss, and the multi-scale attention feature loss:
L = L_cGAN + α·L_L2 + β·L_DAG + ε·L_Fea (1)
where α, β and ε are weight coefficients, set to α = 0.005, β = 1, ε ∈ [1×10^-4, 2×10^-4].
The formula for each loss term is as follows:
L_cGAN(G, D) = E_I[log D(I)] + E_I[log(1 - D(G(I)))] (2)
where G is the generator, D is the discriminator, I is the input image or frame, G(I) represents the perturbed image, and E_I[·] denotes the mathematical expectation over the input I.
L_L2(G) = E_I[||I - G(I)||_2] (3)
where L2 refers to the L2 norm.
L_DAG = Σ_{n=1}^{N} [f_{l_n}(X, t_n) - f_{l'_n}(X, t_n)] (4)
where L_DAG is the high-level classification loss, X is the feature map extracted at I by the Faster R-CNN feature network, t_n is the n-th proposal region in the region proposal network (RPN), l_n is the true label of t_n, and l'_n is a wrong label randomly sampled from the other (incorrect) classes. f(X, t_n) denotes the classification score vector on the n-th proposal region (before softmax normalization). In this embodiment, the N proposal regions T = {t_1, t_2, ..., t_N} with region proposal scores of 0.7 or more are selected.
L_Fea = Σ_m E_I[||A_m ∘ (X_m - R_m)||_2] (5)
where L_Fea is the low-level feature loss, and X_m represents the feature map extracted at the m-th layer of the target detector's feature network (this embodiment selects the ReLU layer after conv3-3 and the ReLU layer after conv4-2 in FIG. 2). R_m is a randomly predefined feature map that stays fixed during training, A_m is the attention weight computed from the RPN's proposal regions, and ∘ is the Hadamard product of two matrices. This feature-map loss forces the attention feature map toward a random arrangement, thereby better manipulating the feature maps of foreground regions.
On the basis of the loss function in equation (1), the loss function must also be optimized according to the coverage criterion. Optimizing the loss function according to the coverage criterion means adding a coverage loss to the loss function of the original UEA method. The coverage loss is obtained by calculating the neuron coverage rate during forward propagation through the target model's feature network.
Common neuron coverage criteria include neuron coverage (Neuron Coverage), extended neuron coverage, MC/DC variants, path coverage, and so on. This embodiment uses neuron coverage (Neuron Coverage, NC) together with neuron boundary coverage (Neuron Boundary Coverage, NBC) and Top-k neuron coverage (Top-k Neuron Coverage, TKNC) from extended neuron coverage. The three are defined as follows:
Neuron coverage definition: if sign(n_k,i, x) = +1, node n_k,i is neuron-covered by test case x, denoted N(n_k,i, x).
Neuron boundary coverage definition: if the activation of n_k,i on x falls outside the activation bounds observed during training, i.e. φ(x, n_k,i) ∈ (-∞, low_n) ∪ (high_n, +∞), node n_k,i is neuron-boundary-covered by test case x, denoted NB(n_k,i, x).
Top-k neuron coverage criterion definition: if rank(n_k,i, x) ≤ k, i.e. the activation of n_k,i ranks among the k largest in its layer, node n_k,i is Top-k-neuron-covered by test case x, denoted TN_k(n_k,i, x).
Given f ∈ {N, NB, TN_k} and the set of neurons in the hidden layers H(N), the neuron coverage rate is defined as:
M_f(H(N), x) = |{n ∈ H(N) | f(n, x)}| / |H(N)| (6)
where N, NB and TN_k respectively denote the neuron coverage criterion, the neuron boundary coverage criterion and the Top-k neuron coverage criterion, and f(n, x) is a function that determines whether neuron n is covered by test case x.
From the given coverage criterion, the coverage loss can be defined as:
L_cov(G) = 1 - M_f(N, G(I)) (7)
where N represents the neuron states computed from the per-layer feature maps obtained by the hook functions during forward propagation.
The loss function of the GAN model can then be improved as:
L = L_cGAN + α·L_L2 + β·L_DAG + ε·L_Fea + γ·L_cov (8)
where γ is the weight coefficient of the coverage loss.
The GAN model is trained with the 5,011 pictures in the training and validation sets of VOC2007 from the PASCAL VOC dataset: the discriminator network and the generator network are trained alternately with the training set samples and the optimized loss function until the set number of training rounds is reached, after which the generator model is obtained and used to generate adversarial examples.
The model hyper-parameter settings are shown in Table 3.
TABLE 3 GAN model initialization settings
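The alternating training can be sketched as follows, reusing the hook and coverage sketches above. The optimizer settings, the coverage weight gamma, the `backbone` attribute and the differentiable stand-in for eq. (7) are assumptions: eq. (7) as written counts covered neurons and is therefore not directly differentiable, so a soft surrogate is used here.

```python
import torch
import torch.nn as nn

def train_gan(G, D, target_model, loader, epochs=20, gamma=0.1, device="cuda"):
    """Alternating UEA-style training with the added coverage term (a sketch)."""
    G, D = G.to(device), D.to(device)
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
    bce = nn.BCELoss()
    for _ in range(epochs):
        for images, _targets in loader:
            images = images.to(device)
            fake = G(images)

            # discriminator step: real vs. generated samples
            opt_d.zero_grad()
            d_real, d_fake = D(images), D(fake.detach())
            loss_d = bce(d_real, torch.ones_like(d_real)) + \
                     bce(d_fake, torch.zeros_like(d_fake))
            loss_d.backward()
            opt_d.step()

            # generator step: GAN + L2 + coverage losses (L_DAG and L_Fea,
            # sketched earlier, would be added here with their weights)
            opt_g.zero_grad()
            feature_maps.clear()
            target_model.backbone(fake)   # forward pass fills the hooked maps
            acts = torch.cat([f.mean(dim=(2, 3)).flatten()
                              for f in feature_maps.values()])
            l_cov = 1.0 - torch.sigmoid(acts).mean()  # soft stand-in for eq. (7)
            d_out = D(fake)
            loss_g = bce(d_out, torch.ones_like(d_out)) \
                   + 0.005 * torch.norm(images - fake, p=2) \
                   + gamma * l_cov
            loss_g.backward()
            opt_g.step()
```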
Step 3: after training is completed, a generator is used for generating an countermeasure sample on a test set, and the performance of the countermeasure sample on various target detection models is evaluated; the specific operation comprises the following steps:
using 4952 pictures in a test set of VOC 2007 in the PASCAL VOC dataset as input, generating challenge samples using the generator in the GAN model trained in step 2, and then evaluating the performance of the challenge samples on various target detection models (including target model Faster-rcnn_vgg16), and recording for convenient comparison in subsequent step 4. The evaluation index includes attack success rate and attack mobility.
The formula for attack success rate (Attack Success Rate, ASR) is as follows:
wherein, mAP adv Representing the mAP value of the challenge sample in the dataset. mAP (mAP) clean Representing the corresponding raw sample mAP value. ASR has a value between 0 and 1. An accuracy-recall curve (P-R curve) is generated with different confidence thresholds, the area of which is the AP value. mAP is the average of the areas under the curves of all classes P-R.
The attack mobility (TR) is measured using the ratio of attack success rates of the challenge samples on the black box model and the white box model, as follows:
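A minimal sketch of these two indicators, assuming the mAP values come from an external VOC-style evaluator:

```python
def attack_success_rate(map_clean: float, map_adv: float) -> float:
    """ASR (eq. 9): relative mAP drop caused by the adversarial examples."""
    return (map_clean - map_adv) / map_clean

def transfer_rate(asr_black: float, asr_white: float) -> float:
    """TR (eq. 10): transferability as the black-box / white-box ASR ratio."""
    return asr_black / asr_white
```

For example, if an attack drops mAP from 0.70 to 0.21 on the white-box model, attack_success_rate(0.70, 0.21) returns 0.70.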
step 4: train the GAN model multiple times, adjust the hyper-parameters related to the coverage criteria, save the generator parameters of the GAN model optimized under different coverage criteria, and use the optimal generator to complete the object detection adversarial example generation task; the specific operation comprises the following steps:
The GAN model is trained multiple times while the hyper-parameters related to the coverage criteria are adjusted. Different neuron coverage criteria have different hyper-parameters to tune, including the activation threshold in neuron coverage and the k value in Top-k neuron coverage (the activation threshold is initially set to 0, and k to 3). The hyper-parameters are adjusted separately under each neuron coverage criterion, and whether the model's performance improves is judged as in step 3. The parameters of the optimal generator obtained under each neuron coverage criterion are saved; since this embodiment uses three coverage criteria, three generator parameter files need to be saved. Because step 3 records the performance of the adversarial examples produced by every generator on the various object detection models, step 4 can select the best generator by comparing the saved performance of the three generators, and the optimal generator is used for the object detection adversarial example generation task, as sketched below.

Claims (9)

1. A deep learning object detection adversarial example generation method based on neuron coverage, characterized in that the method comprises the following steps:
step 1: registering a hook function in the object detection model, wherein the hook function is used for acquiring the feature map of each layer in the feature extraction network of the object detection model; pre-training the object detection model to obtain a trained object detection model;
step 2: constructing a GAN model according to the UEA method, so that the settings of the discriminator, the generator and the optimization function in the GAN model are the same as those in the UEA method;
step 3: importing the trained object detection model into the GAN model constructed in step 2, and fixing the parameters of the trained object detection model;
step 4: alternately training the discriminator and the generator based on the training set samples and the loss function until the set number of training rounds is reached, to obtain the generator parameters of the GAN model under the current neuron coverage criterion, wherein the loss function is optimized according to the current neuron coverage criterion;
step 5: adjusting the neuron coverage criterion and its related hyper-parameters, and training the GAN model multiple times to obtain the generator parameters of the GAN model optimized under different neuron coverage criteria;
step 6: generating adversarial examples using the optimal generator parameters.
2. The deep learning object detection adversarial example generation method based on neuron coverage according to claim 1, characterized in that the object detection model includes:
a feature extraction network, used to extract features from the input picture and output feature maps;
a region proposal network, used to extract target candidate regions based on the feature maps output by the feature extraction network; and
a region-of-interest (ROI) pooling layer, used to generate fixed ROI features for classification and localization.
3. The deep learning object detection adversarial example generation method based on neuron coverage according to claim 2, characterized in that registering the hook function in the object detection model specifically comprises: registering the hook function in the feature extraction network of the object detection model.
4. The deep learning object detection adversarial example generation method based on neuron coverage according to claim 1, characterized in that the loss function is optimized according to the current neuron coverage criterion, and is specifically expressed as:
L = L_cGAN + α·L_L2 + β·L_DAG + ε·L_Fea + γ·L_cov (1)
wherein L represents the loss function, L_cGAN represents the GAN loss function, L_L2 represents the L2 loss function, L_DAG represents the high-level classification loss function, L_Fea represents the low-level feature loss function, L_cov represents the coverage loss, and α, β, ε and γ are weight coefficients;
the coverage loss is obtained by calculating the neuron coverage rate during forward propagation through the feature extraction network of the object detection model.
5. The deep learning object detection adversarial example generation method based on neuron coverage according to claim 4, characterized in that the coverage loss is obtained by calculating the neuron coverage rate during forward propagation through the feature extraction network of the object detection model, specifically as follows:
the neuron coverage criterion, together with the neuron boundary coverage criterion and the Top-k neuron coverage criterion from extended neuron coverage, is used;
neuron coverage criterion definition: if sign(n_k,i, x) = +1, node n_k,i is neuron-covered by test case x, denoted N(n_k,i, x);
neuron boundary coverage criterion definition: if the activation of n_k,i on x falls outside the activation bounds observed during training, i.e. φ(x, n_k,i) ∈ (-∞, low_n) ∪ (high_n, +∞), node n_k,i is neuron-boundary-covered by test case x, denoted NB(n_k,i, x);
Top-k neuron coverage criterion definition: if rank(n_k,i, x) ≤ k, i.e. the activation of n_k,i ranks among the k largest in its layer, node n_k,i is Top-k-neuron-covered by test case x, denoted TN_k(n_k,i, x);
given f ∈ {N, NB, TN_k} and the set of neurons in the hidden layers H(N), the neuron coverage rate of the feature extraction network of the object detection model during forward propagation is expressed as:
M_f(H(N), x) = |{n ∈ H(N) | f(n, x)}| / |H(N)| (6)
wherein N, NB and TN_k respectively denote the neuron coverage criterion, the neuron boundary coverage criterion and the Top-k neuron coverage criterion, and f(n, x) is a function that determines whether neuron n is covered by test case x;
the coverage loss is expressed as:
L_cov(G) = 1 - M_f(N, G(I)) (7)
wherein G(I) represents the perturbed image.
6. The deep learning object detection adversarial example generation method based on neuron coverage according to claim 4, characterized in that the GAN loss function L_cGAN is expressed as:
L_cGAN(G, D) = E_I[log D(I)] + E_I[log(1 - D(G(I)))] (2)
wherein G represents the generator, D represents the discriminator, I represents the input image or frame, G(I) represents the perturbed image, and E_I[·] denotes the mathematical expectation over the input I.
7. The deep learning object detection adversarial example generation method based on neuron coverage according to claim 4, characterized in that the L2 loss function L_L2 is expressed as:
L_L2(G) = E_I[||I - G(I)||_2] (3)
wherein I represents the input image or frame, G(I) represents the perturbed image, and E_I[·] denotes the mathematical expectation over the input I.
8. The deep learning object detection adversarial example generation method based on neuron coverage according to claim 4, characterized in that the high-level classification loss function L_DAG is expressed as:
L_DAG = Σ_{n=1}^{N} [f_{l_n}(X, t_n) - f_{l'_n}(X, t_n)] (4)
wherein I represents the input image or frame, X is the feature map extracted at I by the feature extraction network of the object detection model, t_n is the n-th target candidate region in the region proposal network of the object detection model, l_n is the true label of t_n, l'_n is a wrong label randomly sampled from the other (incorrect) classes, and f(X, t_n) denotes the classification score vector on the n-th target candidate region.
9. The deep learning object detection adversarial example generation method based on neuron coverage according to claim 4, characterized in that the low-level feature loss function L_Fea is expressed as:
L_Fea = Σ_m E_I[||A_m ∘ (X_m - R_m)||_2] (5)
wherein X_m represents the feature map extracted by the m-th layer of the feature extraction network of the object detection model, R_m represents a randomly predefined feature map, A_m represents the attention weight calculated from the target candidate regions of the region proposal network, and ∘ represents the Hadamard product.
CN202311730923.8A 2023-12-15 2023-12-15 Deep learning object detection adversarial example generation method based on neuron coverage Pending CN117876750A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311730923.8A CN117876750A (en) 2023-12-15 Deep learning object detection adversarial example generation method based on neuron coverage

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311730923.8A CN117876750A (en) 2023-12-15 Deep learning object detection adversarial example generation method based on neuron coverage

Publications (1)

Publication Number Publication Date
CN117876750A true CN117876750A (en) 2024-04-12

Family

ID=90587486

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311730923.8A CN117876750A (en) 2023-12-15 Deep learning object detection adversarial example generation method based on neuron coverage

Country Status (1)

Country Link
CN (1) CN117876750A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination