CN117523342B - High-mobility countermeasure sample generation method, equipment and medium - Google Patents

High-mobility countermeasure sample generation method, equipment and medium

Info

Publication number
CN117523342B
CN117523342B
Authority
CN
China
Prior art keywords
image
sample
representing
loss function
classification model
Prior art date
Legal status
Active
Application number
CN202410013633.XA
Other languages
Chinese (zh)
Other versions
CN117523342A (en)
Inventors
Xu Linfeng (许林峰)
Chen Xianyi (陈先意)
Current Assignee
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology
Priority to CN202410013633.XA
Publication of CN117523342A
Application granted
Publication of CN117523342B
Legal status: Active


Classifications

    • G06V 10/774 Image or video recognition using machine learning: generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/764 Image or video recognition using machine learning: classification, e.g. of video objects
    • G06V 10/82 Image or video recognition using machine learning: neural networks
    • G06N 3/045 Neural network architectures: combinations of networks
    • G06N 3/0475 Neural network architectures: generative networks
    • G06N 3/084 Neural network learning methods: backpropagation, e.g. using gradient descent
    • G06N 3/094 Neural network learning methods: adversarial learning
    • G06N 3/096 Neural network learning methods: transfer learning

Abstract

The invention discloses a high-transferability adversarial sample generation method, device, and medium. The transferability of the adversarial sample is improved by further strengthening the newly generated features while the original features are disturbed. Unlike other feature-level methods, which only interfere with the original features, the disclosed method constructs its loss function by aggregating the original feature gradients and the newly generated feature gradients, so that the newly generated features are reinforced while the original features of the image are disturbed. When attacking other models, the transferred attack is therefore more likely to fall into the newly generated feature category, and adversarial samples with higher transferability can be generated.

Description

High-mobility countermeasure sample generation method, equipment and medium
Technical Field
The invention relates to a high-mobility countermeasure sample (that is, high-transferability adversarial sample) generation method, device, and medium, and belongs to the technical field of image processing.
Background
In recent years, with the rapid development of deep neural networks, deep learning has been applied to, and has made remarkable progress in, a variety of computer vision fields such as object detection, image classification, and semantic segmentation. At the same time, artificial intelligence security problems have drawn great attention from researchers, because deep neural networks are vulnerable, unstable, and easy to attack. Numerous studies have shown that adversarial samples can be generated by adding to an original benign sample small perturbations that do not alert a human observer; these samples can mislead a deep learning model into producing erroneous results. For example, in an image recognition scenario, a picture originally recognized as a cat by an image recognition model is misclassified as a fish after a small disturbance imperceptible to the human eye is added. This creates a potential safety hazard for deep learning models after actual deployment.
Adversarial samples are mainly used in two kinds of scenarios. In the first, their properties are used as a means of checking the classification accuracy and the security of a deep learning model, so that potential safety hazards arising after actual deployment of the model can be avoided. In the second, in order to cope with attacks and improve model classification accuracy, adversarial samples with high transferability must be generated in advance using an existing image classification model; various kinds of image classification models are then trained with these adversarial samples so that the models classify them correctly, thereby resisting external attacks. Both scenarios require researchers to be able to generate adversarial samples with higher transferability.
Currently, there are many methods for generating adversarial samples with high transferability. For example, feature-level methods reduce the influence of features specific to the local surrogate model by disturbing the output of the original image at an intermediate layer of the network, further improving the transferability of the adversarial sample. The Feature Importance-aware Attack (FIA), for instance, uses aggregated gradients to find the important features of an image and destroy them.
Existing feature-level methods generate an adversarial sample by disturbing the original target features of the image. However, because different models differ in parameters and structure, the disturbed original target features also vary from model to model, so the transfer effect is not ideal. Existing feature-level approaches focus only on interfering with the original target features of the image and ignore the impact on transferability of the new features generated while the original features are being disturbed. To further enhance the transferability of adversarial samples, those skilled in the art need to improve on the existing generation methods. The invention therefore proposes to further strengthen the newly generated features while interfering with the original features, so as to improve the transferability of the adversarial sample.
Disclosure of Invention
The purpose is as follows: in order to overcome the defects in the prior art, the invention provides a high-transferability adversarial sample generation method, device, and medium, and first proposes a high-transferability adversarial sample generation method based on reinforcing newly generated features. The transferability of the adversarial sample is improved by further strengthening the newly generated features while the original features are being disturbed.
The technical scheme is as follows: in order to solve the above technical problems, the invention adopts the following technical scheme.
In a first aspect, a high-transferability adversarial sample generation method includes the following steps:
step 1: to the original imageInput classification model->Obtaining a classification model->First->Feature map output by layer intermediate layer
Step 2: will be originalImage processing apparatusRandom pixel point of (1) is replaced by random noise to obtain random noise disturbance image +.>
Step 3: disturbing an image with random noiseInput classification model->Respectively obtaining the image original characteristic category labels +.>Output of +.>And new feature class tag after feature attack +.>Output of +.>. According to->、/>Gradient counter-propagation to the +.>The interlayer layer obtains the original characteristic gradient of the image>And new generation of feature gradients. Wherein (1)>Class confidence representing the output of the classification model, +.>Representing random disturbance image +.>Input classification model->Back from->Characteristic diagram of layer convolution layer output, +.>Representing the derivative.
Step 4: repeating the steps 2 to 3 until the preset times are reached, and polymerizing the obtained N image original feature gradients to obtainPolymerizing the N new generated feature gradients to obtain +.>. Wherein (1)>Representation pair->The result of (2) is a 2-norm value. />Representation pair->The result of (2) is a 2-norm value.
Step 5: construction of a loss function. Wherein (1)>Representing the product of the corresponding points, +.>Representing the influencing factors->Representing the challenge sample to be determined,>the representation will->Input classification model->Back from->And (5) a characteristic diagram of the layer convolution layer output.
Step 6: according to the loss functionAnd constructing an optimized loss function model, and solving the optimized loss function model to obtain a final countermeasure sample.
Preferably, the optimizing loss function model specifically includes:
wherein,representing a loss function->Minimum +.>
Representation->Is at->Original image is modified in scope->Is a derived challenge sample of the pixel values of (a).Represents infinite norm>Representing the super parameter.
As a preferred solution, solving the optimized loss function model to obtain the final adversarial sample specifically includes:
Step 6.1: Obtain the Newton acceleration sample y_j of the j-th round, computed as:
y_j = x′_j + γ·gra_j
where, at initialization j = 0, the gradient gra_0 = 0 and x′_0 = x is the original image; gra_j denotes the gradient of the j-th round, y_j the Newton acceleration sample of the j-th round, x′_j the adversarial sample of the j-th round, and γ the Newton acceleration control factor.
Step 6.2: Input y_j into the classification model f and take the feature map f_k(y_j) output by the k-th convolution layer.
Step 6.3: Substitute f_k(y_j) into the loss function L(x′) to obtain L(y_j).
Step 6.4: Backpropagate L(y_j) from the intermediate layer to the input layer to obtain the gradient ∇_x L(y_j).
Step 6.5: Compute gra_{j+1} from the gradient ∇_x L(y_j), as:
gra_{j+1} = μ·gra_j + g_j, with g_j = ∇_x L(y_j) / ||∇_x L(y_j)||_1
where ||·||_1 denotes the 1-norm operation and μ denotes the gradient accumulation control factor.
Step 6.6: Compute the adversarial sample x′_{j+1} of round j+1 from gra_{j+1} and x′_j, as:
x′_{j+1} = Clip_{x,ε}{ x′_j − α·sign(gra_{j+1}) }
where α denotes the step size of the iterative attack and Clip_{x,ε}{·} clips element values to the range [x − ε, x + ε]. The sign function is applied element-wise: sign(gra_{j+1}) = 1 if gra_{j+1} > 0; 0 if gra_{j+1} = 0; −1 if gra_{j+1} < 0.
Step 6.7: Repeat iteration steps 6.1 to 6.6 and judge whether the number of iterations has reached the preset count. If so, the final adversarial sample is generated; if not, return to step 6.1.
Preferably, the noise added in step 2 and the randomly selected pixel points of the original image differ in each repetition within step 4.
In a second aspect, a computer-readable storage medium has a computer program stored thereon which, when executed by a processor, implements a high-transferability adversarial sample generation method according to any implementation of the first aspect.
In a third aspect, a computer device comprises:
a memory for storing instructions; and
a processor for executing the instructions to cause the computer device to perform the operations of a high-transferability adversarial sample generation method according to any implementation of the first aspect.
The beneficial effects are that: compared with other feature-level methods, which only interfere with the original features, the high-transferability adversarial sample generation method, device, and medium provided by the invention construct the loss function by aggregating the original feature gradients and the newly generated feature gradients. The newly generated features are thereby reinforced while the original features of the image are disturbed. When attacking other models, the transferred attack is more likely to fall into the newly generated feature category, so adversarial samples with higher transferability can be generated.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
Detailed Description
The following describes the technical solutions in the embodiments of the present invention clearly and completely with reference to the accompanying drawings, in which embodiments of the invention are shown. It is evident that the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art without inventive effort fall within the protection scope of the present invention.
The invention will be further described with reference to specific examples.
Example 1:
This embodiment describes a high-transferability adversarial sample generation method, which includes the following steps:
step 1: to the original imageInput classification model->Obtaining a classification model->First->Feature map output by layer intermediate layer
In one embodiment, the classification modelRefers to an image classification model, such as a VGG model, for the current main stream.
The middle layer refers to a classification modelA convolution layer before the full connection layer.
First, theThe layer means->And a plurality of convolution layers.
Feature mapRefer to the original image +.>From the classification model->Input port input is followed by->And the content output after the convolution layers.
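As an illustration of step 1, the intermediate feature map can be captured with a forward hook. The sketch below is a minimal PyTorch example assuming torchvision's pretrained VGG-16; the layer index k, the input tensor, and all variable names are illustrative assumptions rather than values taken from the patent.

```python
import torch
from torchvision import models

# Minimal sketch (assumption): capture the feature map f_k(x) of a pretrained
# VGG-16 with a forward hook; the index k below is an illustrative choice.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()

features = {}

def hook(module, inputs, output):
    # Save the output of the hooked convolution layer as f_k(.).
    features["fk"] = output

k = 14  # illustrative index into model.features (a convolution layer)
handle = model.features[k].register_forward_hook(hook)

x = torch.rand(1, 3, 224, 224)  # stand-in for the original image x
logits = model(x)               # the forward pass fills features["fk"]
fk_x = features["fk"]           # the feature map f_k(x)
```

The hook stays registered, and model, features, hook, k, and x are reused by the later sketches in this embodiment.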
Step 2: Replace random pixel points of the original image x with random noise to obtain the random noise perturbed image x̃.
One embodiment randomly selects a fixed proportion of the pixels of the original image and replaces the values at those pixels with random noise.
Step 3: Input the random noise perturbed image x̃ into the classification model f to obtain, respectively, the output l_r(x̃) for the image's original feature class label r and the output l_t(x̃) for the new feature class label t generated by the feature attack, and backpropagate each gradient to the k-th intermediate layer to obtain the original feature gradient ∂l_r(x̃)/∂f_k(x̃) and the newly generated feature gradient ∂l_t(x̃)/∂f_k(x̃).
In one embodiment, the invention uses a classification model trained on the ImageNet dataset, which contains 1000 categories in total.
Here l(·) is the confidence over all categories output by the classification model.
The label r is the category annotated after manual identification of the original image, taken as the true category of the image. l_r(x̃) is the confidence for class label r output after the random noise perturbed image x̃ is input into the classification model f.
The label t is obtained by generating an adversarial sample from the original image with an existing feature-level method and classifying that sample again on the original classification model; the erroneous result category is t. l_t(x̃) is then the confidence for class label t output after x̃ is input into f.
The two class outputs are backpropagated separately, but only to the k-th intermediate layer mentioned in step 1. The mathematical expressions of this process are ∂l_r(x̃)/∂f_k(x̃) and ∂l_t(x̃)/∂f_k(x̃).
Here f_k(x̃) denotes the feature map output from the k-th convolution layer after x̃ is input into the classification model f; ∂l_r(x̃)/∂f_k(x̃) denotes differentiating the output for label r with respect to f_k(x̃), and ∂l_t(x̃)/∂f_k(x̃) denotes differentiating the output for label t with respect to f_k(x̃).
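Continuing the hypothetical setup above, the two backpropagations of step 3 stop at the feature map rather than at the input; a minimal sketch, with r and t as assumed label indices, could be:

```python
# Minimal sketch (assumption): gradients of the confidences for labels r and t
# with respect to the intermediate feature map f_k, not the input image.
x_tilde = x.clone()  # stand-in for the random noise perturbed image of step 2
logits = model(x_tilde)
fk = features["fk"]

r, t = 281, 1  # illustrative indices for the true class and the new class
grad_r = torch.autograd.grad(logits[0, r], fk, retain_graph=True)[0]  # dl_r/df_k
grad_t = torch.autograd.grad(logits[0, t], fk)[0]                     # dl_t/df_k
```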
Step 4: Repeat steps 2 to 3 until the preset number N is reached; aggregate the N original feature gradients to obtain Δ_r and the N newly generated feature gradients to obtain Δ_t.
Repeat the operations of step 2 and step 3 N times.
The added noise and the randomly selected pixel points of the original image differ in each repetition of step 2.
Consequently, the gradients ∂l_r(x̃_n)/∂f_k(x̃_n) and ∂l_t(x̃_n)/∂f_k(x̃_n) obtained in each execution of step 3 also differ.
Here x̃_n denotes the random noise perturbed image of the n-th operation.
The per-class gradients are then accumulated separately to obtain Δ_r = (1/C_1) Σ_{n=1..N} ∂l_r(x̃_n)/∂f_k(x̃_n) and Δ_t = (1/C_2) Σ_{n=1..N} ∂l_t(x̃_n)/∂f_k(x̃_n).
Here f_k denotes the result obtained using the first k convolution blocks.
C_1 = ||Σ_{n=1..N} ∂l_r(x̃_n)/∂f_k(x̃_n)||_2 denotes taking the 2-norm of the summed original feature gradients, and C_2 = ||Σ_{n=1..N} ∂l_t(x̃_n)/∂f_k(x̃_n)||_2 the 2-norm of the summed newly generated feature gradients.
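A minimal sketch of this aggregation, continuing the same hypothetical setup (the values of N and p and the uniform noise are illustrative choices):

```python
# Minimal sketch (assumption): aggregate the two gradients over N randomly
# masked copies of x, then normalize each sum by its 2-norm (C_1 and C_2).
N, p = 30, 0.3  # illustrative aggregation count and pixel-drop probability
sum_r = torch.zeros_like(fk)
sum_t = torch.zeros_like(fk)
for n in range(N):
    mask = (torch.rand_like(x) > p).float()    # keep a pixel with prob 1 - p
    noise = torch.rand_like(x)                 # random 0-1 noise
    x_tilde_n = mask * x + (1.0 - mask) * noise
    logits = model(x_tilde_n)
    fk_n = features["fk"]
    sum_r += torch.autograd.grad(logits[0, r], fk_n, retain_graph=True)[0]
    sum_t += torch.autograd.grad(logits[0, t], fk_n)[0]

delta_r = sum_r / sum_r.norm(p=2)  # aggregated original feature gradient
delta_t = sum_t / sum_t.norm(p=2)  # aggregated newly generated feature gradient
```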
Step 5: Construct the loss function L(x′) = Σ((Δ_r − β·Δ_t) ⊙ f_k(x′)) by multiplying the feature map with the difference of the two aggregated gradients.
In one embodiment, the Δ_r and Δ_t obtained above are the data used to construct the loss function, in preparation for generating an adversarial sample by subsequently optimizing L(x′).
Here ⊙ denotes the element-wise product, and β denotes the influence factor used to adjust the weight of the new features.
Substituting a candidate x′ into the loss function yields L(x′), where x′ denotes the adversarial sample obtained by altering pixel values of the original image x, used as the variable to be determined, and f_k(x′) denotes the feature map output by the k-th convolution layer after x′ is input into the classification model f.
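As a sketch, the loss of step 5 then reduces to a single expression over the tensors computed above; beta is an illustrative value for the influence factor β:

```python
# Minimal sketch (assumption): L(x') as the summed element-wise product of
# (delta_r - beta * delta_t) with the feature map f_k(x') of the candidate x'.
beta = 1.0  # illustrative influence factor

def loss_fn(fk_adv):
    return ((delta_r - beta * delta_t) * fk_adv).sum()
```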
Step 6: Iteratively generate the adversarial sample using the optimized loss function model. With the loss function L(x′) constructed in the previous step, the adversarial sample generation problem can be converted into an optimization problem and then solved using a Newton iteration method. The defining formula is:
x′* = argmin_{x′} L(x′), s.t. ||x′ − x||_∞ ≤ ε
where argmin_{x′} L(x′) denotes the x′ that minimizes the loss function L(x′); the constraint states that x′ is an adversarial sample obtained by modifying pixel values of the original image x within ε, i.e., the x′ within this range that brings the loss function L(x′) to its minimum; ||·||_∞ denotes the infinity norm, and ε denotes the hyperparameter used to control the perturbation magnitude.
In one embodiment, the optimized loss function model is solved using the Newton momentum accumulation (NI) method to obtain the final adversarial sample x′*. The specific process is as follows:
Step 6.1: Use the Newton momentum acceleration method, which escapes local optima easily, to obtain the Newton acceleration sample y_j of the j-th round, computed as:
y_j = x′_j + γ·gra_j
where j denotes the j-th iteration round; at initialization j = 0, the gradient gra_0 = 0 and x′_0 = x is the original image; gra_j denotes the gradient of the j-th round; y_j denotes the Newton acceleration sample of the j-th round; x′_j denotes the adversarial sample of the j-th round; and γ denotes the Newton acceleration control factor.
Step 6.2: Input y_j into the classification model f and take the feature map f_k(y_j) output by the k-th convolution layer.
Step 6.3: Substitute f_k(y_j) into the loss function L(x′) to obtain L(y_j).
Step 6.4: Backpropagate L(y_j) from the intermediate layer to the input layer to obtain the gradient ∇_x L(y_j).
Step 6.5: Compute gra_{j+1} from the gradient ∇_x L(y_j), as:
gra_{j+1} = μ·gra_j + g_j, with g_j = ∇_x L(y_j) / ||∇_x L(y_j)||_1
where ||·||_1 denotes the 1-norm operation and μ denotes the gradient accumulation control factor.
Step 6.6: Add the perturbation to the image generated in the previous round, clip the sample, and obtain the newly generated adversarial sample x′_{j+1} of round j+1, as:
x′_{j+1} = Clip_{x,ε}{ x′_j − α·sign(gra_{j+1}) }
where α denotes the step size of the iterative attack and Clip_{x,ε}{·} clips element values so that they lie within [x − ε, x + ε]. The sign function is applied element-wise: sign(gra_{j+1}) = 1 if gra_{j+1} > 0; 0 if gra_{j+1} = 0; −1 if gra_{j+1} < 0.
Step 6.7: Repeat iteration steps 6.1 to 6.6 and judge whether the number of iterations has reached the preset count; if so, generate the final adversarial sample; if not, return to step 6.1.
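Putting steps 6.1 to 6.7 together, a compact sketch of the Newton momentum loop under the same hypothetical setup (all hyperparameter values are illustrative, and inputs are assumed to lie in [0, 1]):

```python
# Minimal sketch (assumption): Newton momentum iteration of steps 6.1-6.7,
# minimizing the loss while clipping into the epsilon-ball around x.
alpha, gamma, mu = 1.6 / 255, 1.0, 1.0  # step size, Newton factor, momentum
eps, T = 16 / 255, 10                   # perturbation bound and iteration count

x_adv = x.clone()
gra = torch.zeros_like(x)
for j in range(T):
    y = (x_adv + gamma * gra).detach().requires_grad_(True)  # step 6.1
    logits = model(y)                                        # step 6.2 fills features["fk"]
    loss = loss_fn(features["fk"])                           # step 6.3
    grad_y = torch.autograd.grad(loss, y)[0]                 # step 6.4
    gra = mu * gra + grad_y / grad_y.abs().sum()             # step 6.5, 1-norm scaling
    x_adv = x_adv - alpha * gra.sign()                       # step 6.6, descend on L
    x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)

handle.remove()  # detach the forward hook once the final x_adv is produced
```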
Example 2:
This embodiment describes a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements a high-transferability adversarial sample generation method as described in embodiment 1.
Example 3:
This embodiment introduces a computer device, comprising:
a memory for storing instructions; and
a processor for executing the instructions to cause the computer device to perform the operations of a high-transferability adversarial sample generation method as described in embodiment 1.
Example 4:
A high-transferability adversarial sample generation method will now be described in detail with reference to FIG. 1.
In the present embodiment, f denotes the image classification model; when the clean original image x is input into the classification model, the probability output f(x) is obtained.
The purpose of the invention is to add an imperceptible perturbation to the original image x, generating an adversarial sample x′ that causes the image classification model to produce a misclassification result. The adversarial sample generation process can be defined as:
f(x′; θ) ≠ f(x; θ), s.t. ||x′ − x||_p ≤ ε
where x denotes the original image, x′ is the adversarial sample, f denotes the image classification model, θ denotes the parameters of the image classification model f, ||x′ − x||_p denotes the L_p-norm distance between x′ and x, and ε is a hyperparameter used to control the perturbation magnitude. An adversarial sample generated on the local surrogate (original) model can also successfully mislead the decisions of other target models, thereby realizing the transferability of the generated adversarial samples.
In an embodiment, an embodiment of the present invention provides a method for generating a challenge sample with high mobility, including:
step 1: to the original imageInput classification model->Obtaining a classification model->First->Feature map output by layer intermediate layer
Step 2: Randomly discard pixel points of the original image x and add random 0-1 distributed noise to obtain x̃, by the formula:
x̃ = M_p ⊙ x + (1 − M_p) ⊙ N
where M_p is a matrix of the same size as the image x containing only the values 0 and 1: pixels at positions holding 1 are kept and pixels at positions holding 0 are discarded; p denotes the probability of an entry being 0; (1 − M_p) denotes the element-wise negation of M_p; N denotes a random noise matrix of the same size as the image x; and ⊙ denotes the element-wise product. The probability of discarding a pixel is thus p.
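A minimal sketch of this masking formula, with names chosen to mirror the text; the value of p and the uniform noise distribution are illustrative assumptions:

```python
import torch

# Minimal sketch (assumption): the masking formula of step 2; pixels where
# M_p holds 1 are kept, pixels where it holds 0 are replaced by the noise N.
p = 0.3                                 # illustrative drop probability
x = torch.rand(1, 3, 224, 224)          # stand-in for the original image x
M_p = (torch.rand_like(x) > p).float()  # binary keep/discard matrix
N = torch.rand_like(x)                  # random 0-1 noise matrix
x_tilde = M_p * x + (1.0 - M_p) * N     # element-wise products, as in the formula
```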
Step 3: Input the random noise perturbed image x̃ from the previous step into the classification model f to obtain, respectively, the output l_r(x̃) for the image's original feature class label r and the output l_t(x̃) for the new feature class label t after the feature attack, and backpropagate each gradient to the k-th intermediate layer to obtain the original feature gradient ∂l_r(x̃)/∂f_k(x̃) and the newly generated feature gradient ∂l_t(x̃)/∂f_k(x̃).
The label r is the category annotated after manual identification of the original image, taken as the true category of the image. l_r(x̃) is the confidence for class label r output after x̃ is input into the classification model f.
The label t comes from an adversarial sample generated from the original image by an existing feature-level method: reclassifying that sample on the original classification model yields the erroneous result category t. l_t(x̃) is then the confidence for class label t output after x̃ is input into f. This process is expressed as:
x* = A(x), t = argmax l(x*)
where A(·) denotes an existing feature-level attack method; applying it to the original image x yields the adversarial sample x*. Feeding this adversarial sample into the original image classification model f gives the category label t.
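For illustration, the label t can be produced with any existing feature-level attack in the role of A(·). The sketch below substitutes a one-step FGSM purely as a stand-in for such an attack (the patent itself uses feature-level methods such as RPA); model and r are illustrative assumptions, and x is reused from the sketch above.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Minimal sketch (assumption): obtain the new feature class label t by running
# some existing attack A(.) and re-classifying its output on the original
# model. One-step FGSM stands in here for a feature-level attack such as RPA.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
r = 281  # illustrative true-class label of x

def fgsm(model, x, label, eps=16 / 255):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), torch.tensor([label]))
    g = torch.autograd.grad(loss, x)[0]
    return (x + eps * g.sign()).clamp(0, 1).detach()

x_star = fgsm(model, x, r)                  # adversarial sample x* = A(x)
with torch.no_grad():
    t = model(x_star).argmax(dim=1).item()  # erroneous class, the new label t
```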
Step 4: Differences in parameters and network structure between different classification models lead to differences in their features. The model-specific characteristics carried along when extracting image class features make the transferability of the adversarial sample poor. For this purpose, the image x is transformed multiple times in a way that preserves its semantic features, and the feature gradients obtained from the transformed images are aggregated. The aggregated gradient weakens the characteristics carried over from the original classification model, thereby improving the transferability of the adversarial sample. By repeating steps two and three until the set number N is reached, N class-r feature gradients ∂l_r(x̃_n)/∂f_k(x̃_n) and N class-t feature gradients ∂l_t(x̃_n)/∂f_k(x̃_n) are obtained. Aggregation is performed separately; the aggregated gradients are computed by:
Δ_r = (1/C_1) Σ_{n=1..N} ∂l_r(x̃_n)/∂f_k(x̃_n), Δ_t = (1/C_2) Σ_{n=1..N} ∂l_t(x̃_n)/∂f_k(x̃_n)
with C_1 and C_2 the 2-norms of the respective sums.
Step 5: The invention uses the following loss function to guide the generation of adversarial samples:
L(x′) = Σ((Δ_r − β·Δ_t) ⊙ f_k(x′))
Step 6: Iteratively generate the adversarial sample using the optimized loss function model. With the loss function L(x′) constructed in the previous step, the adversarial sample generation problem can be converted into an optimization problem and then solved using a Newton iteration method. The defining formula is:
x′* = argmin_{x′} L(x′), s.t. ||x′ − x||_∞ ≤ ε
where argmin_{x′} L(x′) denotes the x′ that minimizes the loss function L(x′); the constraint states that x′ is an adversarial sample obtained by modifying pixel values of the original image x within ε, i.e., the x′ within this range that brings the loss function to its minimum; ||·||_∞ denotes the infinity norm, and ε denotes the hyperparameter controlling the perturbation magnitude.
In one embodiment, the optimized loss function model is solved using the Newton momentum accumulation (NI) method to obtain the final adversarial sample x′*.
Example 5:
To evaluate the effectiveness of the method in generating adversarial samples with high transferability, this example compares the adversarial samples it generates against those of the existing feature-level methods FIA (Feature Importance-aware Attack), RPA (Random Patch Attack), and NAA (Neuron Attribution-based Attack). The method of the invention is referred to as Ours.
The attack performance is evaluated with 5 classification models as target models, of which the 4 normally trained classification models are:
VGG-16 (Visual Geometry Group, 16 layers), Res-152 (a 152-layer network built from ResNet, i.e. deep residual network, units), Inc-v3 (Google's Inception-v3 convolutional neural network), and Inc-v4 (Google's Inception-v4 convolutional neural network).
The 1 defense classification model, obtained through adversarial training, is Inc-v3-adv (an adversarially trained Inception-v3 convolutional neural network).
Four local surrogate models were selected to generate the adversarial samples: Inc-v3, Inc-v4, Res-152, and VGG-16.
The parameters of the adversarial sample generation methods are set as follows:
For all classification models, the intermediate layer is set to layer 3.
For the FIA parameter settings, the aggregation number N is set to 30; regarding the drop probability p, p = 0.3 when attacking the normally trained classification models and p = 0.1 when attacking the defense classification model.
For the RPA parameter settings, the aggregation number N is set to 60 and the pixel modification probability p_m to 0.3.
For the NAA parameter settings, the aggregation number N is set to 30, with the forward feature influence factor set correspondingly.
For Ours, the result of the RPA attack is taken as the new feature class label t, the random perturbation pixel probability p is set to 0.3, and the Newton acceleration control factor γ is set to 1.0.
All adversarial sample generation methods set the maximum perturbation to 16, the number of iterations to T = 10, and the step size α accordingly; a decay factor is likewise set for all methods.
As shown in Table 1, the first column gives the original (surrogate) model used to generate the adversarial samples, and the table entries give the attack success rates obtained when those samples are transferred to the other models. Entries marked * are the success rates of the adversarial samples on the original model itself; all other entries are black-box attack success rates. The transfer attack success rate is the proportion of images misclassified by the attacked model for the corresponding generating model; the higher the proportion, the better the attack performance. The best transfer result in each item is highlighted in bold.
The results show that the proposed attack method attains the highest success rate in every item. Moreover, compared with the best results of the other baseline methods, the overall attack success rate improves by more than 2.0%.
The experimental results show that, relative to the baseline methods, the proposed strategy of reinforcing newly generated features maximally improves the transferability of the generated adversarial samples.
Table 1 Comparison of transfer attack success rates
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is only a preferred embodiment of the invention. It should be noted that various modifications and adaptations can be made by those skilled in the art without departing from the principles of the present invention, and such modifications and adaptations are also intended to fall within the protection scope of the invention.

Claims (5)

1. A high-transferability adversarial sample generation method, characterized by comprising the following steps:
step 1: inputting an original image x into a classification model f, and obtaining a feature map f_k(x) output by the k-th intermediate layer of the classification model f;
step 2: replacing random pixel points of the original image x with random noise to obtain a random noise perturbed image x̃;
step 3: inputting the random noise perturbed image x̃ into the classification model f to respectively obtain the output l_r(x̃) of the image original feature class label r and the output l_t(x̃) of the newly generated feature class label t after the feature attack; according to l_r(x̃) and l_t(x̃), backpropagating the gradients respectively to the k-th intermediate layer to obtain the image original feature gradient ∂l_r(x̃)/∂f_k(x̃) and the newly generated feature gradient ∂l_t(x̃)/∂f_k(x̃), wherein l(·) represents the class confidence output by the classification model, f_k(x̃) represents the feature map output from the k-th convolution layer after the random noise perturbed image x̃ is input into the classification model f, and ∂ represents the derivative;
step 4: repeating steps 2 to 3 until a preset number N is reached, aggregating the obtained N image original feature gradients to obtain Δ_r = (1/C_1) Σ_{n=1..N} ∂l_r(x̃_n)/∂f_k(x̃_n), and aggregating the N newly generated feature gradients to obtain Δ_t = (1/C_2) Σ_{n=1..N} ∂l_t(x̃_n)/∂f_k(x̃_n), wherein C_1 represents the 2-norm of the summed original feature gradients and C_2 represents the 2-norm of the summed newly generated feature gradients;
step 5: constructing a loss function L(x′) = Σ((Δ_r − β·Δ_t) ⊙ f_k(x′)), wherein ⊙ represents the element-wise product, β represents the influence factor, x′ represents the adversarial sample to be determined, and f_k(x′) represents the feature map output from the k-th convolution layer after inputting x′ into the classification model f;
step 6: constructing an optimized loss function model according to the loss function L(x′), and solving the optimized loss function model to obtain a final adversarial sample;
the optimized loss function model is specifically:
x′* = argmin_{x′} L(x′)
s.t. ||x′ − x||_∞ ≤ ε
wherein argmin_{x′} L(x′) represents the x′ that minimizes the loss function L(x′); ||x′ − x||_∞ ≤ ε represents that x′ is an adversarial sample obtained by modifying the values of pixel points of the original image x within ε; ||·||_∞ represents the infinity norm, and ε represents a hyperparameter.
2. The high-transferability adversarial sample generation method according to claim 1, characterized in that solving the optimized loss function model to obtain the final adversarial sample specifically comprises:
step 6.1: obtaining the Newton acceleration sample y_j of the j-th round, computed as:
y_j = x′_j + γ·gra_j
wherein, when j is initialized to 0, the gradient gra_0 = 0 and x′_0 = x is the original image; y_j represents the Newton acceleration sample of the j-th round, x′_j represents the adversarial sample of the j-th round, γ represents the Newton acceleration control factor, and gra_j represents the gradient of the j-th round;
step 6.2: inputting y_j into the classification model f and taking the feature map f_k(y_j) output from the k-th convolution layer;
step 6.3: substituting f_k(y_j) into the loss function L(x′) to obtain L(y_j);
step 6.4: backpropagating L(y_j) from the intermediate layer to the input layer to obtain the gradient ∇_x L(y_j);
step 6.5: computing gra_{j+1} according to the gradient ∇_x L(y_j), as:
gra_{j+1} = μ·gra_j + g_j
wherein g_j = ∇_x L(y_j) / ||∇_x L(y_j)||_1, ||·||_1 represents the 1-norm operation, and μ represents the gradient accumulation control factor;
step 6.6: computing the adversarial sample x′_{j+1} of round j+1 according to gra_{j+1} and x′_j, as:
x′_{j+1} = Clip_{x,ε}{ x′_j − α·sign(gra_{j+1}) }
wherein α represents the step size of the iterative attack, and Clip_{x,ε}{·} represents clipping element values to the range [x − ε, x + ε];
the sign function sign(gra_{j+1}) is, element-wise:
sign(gra_{j+1}) = 1 if gra_{j+1} > 0; 0 if gra_{j+1} = 0; −1 if gra_{j+1} < 0;
step 6.7: repeating iteration steps 6.1 to 6.6, and judging whether the number of iterations reaches a preset count; if so, generating the final adversarial sample; if not, returning to step 6.1.
3. The high-transferability adversarial sample generation method according to claim 1, characterized in that: the noise added in step 2 and the randomly selected pixel points of the original image differ in each repetition within step 4.
4. A computer-readable storage medium, characterized by: a computer program stored thereon which, when executed by a processor, implements the high-transferability adversarial sample generation method according to any one of claims 1-3.
5. A computer device, characterized by comprising:
a memory for storing instructions;
a processor for executing the instructions to cause the computer device to perform the operations of the high-transferability adversarial sample generation method according to any one of claims 1-3.
CN202410013633.XA 2024-01-04 2024-01-04 High-mobility countermeasure sample generation method, equipment and medium Active CN117523342B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410013633.XA CN117523342B (en) 2024-01-04 2024-01-04 High-mobility countermeasure sample generation method, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410013633.XA CN117523342B (en) 2024-01-04 2024-01-04 High-mobility countermeasure sample generation method, equipment and medium

Publications (2)

Publication Number Publication Date
CN117523342A (en) 2024-02-06
CN117523342B (en) 2024-04-16

Family

ID=89751699

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410013633.XA Active CN117523342B (en) 2024-01-04 2024-01-04 High-mobility countermeasure sample generation method, equipment and medium

Country Status (1)

Country Link
CN (1) CN117523342B (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220261626A1 (en) * 2021-02-08 2022-08-18 International Business Machines Corporation Distributed Adversarial Training for Robust Deep Neural Networks
CN113554089B (en) * 2021-07-22 2023-04-18 西安电子科技大学 Image classification countermeasure sample defense method and system and data processing terminal

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110045335A (en) * 2019-03-01 2019-07-23 合肥工业大学 Based on the Radar Target Track recognition methods and device for generating confrontation network
CN111325324A (en) * 2020-02-20 2020-06-23 浙江科技学院 Deep learning confrontation sample generation method based on second-order method
CN111461307A (en) * 2020-04-02 2020-07-28 武汉大学 General disturbance generation method based on generation countermeasure network
CN111652290A (en) * 2020-05-15 2020-09-11 深圳前海微众银行股份有限公司 Detection method and device for confrontation sample
CN114283341A (en) * 2022-03-04 2022-04-05 西南石油大学 High-transferability confrontation sample generation method, system and terminal
CN114842242A (en) * 2022-04-11 2022-08-02 上海大学 Robust countermeasure sample generation method based on generative model
CN115115905A (en) * 2022-06-13 2022-09-27 苏州大学 High-mobility image countermeasure sample generation method based on generation model
CN116011558A (en) * 2023-01-31 2023-04-25 南京航空航天大学 High-mobility countermeasure sample generation method and system
CN116993893A (en) * 2023-09-26 2023-11-03 南京信息工程大学 Method and device for generating antagonism map for resisting AI self-aiming cheating

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Adversarial Attack and Defense: A Survey; Anirban Chakraborty et al.; arXiv:1810.00069; 2018-09-28; pp. 1-31 *
Interpreting Adversarial Examples in Deep Learning; Sicong Han et al.; ACM Computing Surveys; 2023-07-17; Vol. 55; pp. 1-38 *
Towards Transferable Targeted Adversarial Examples; Zhibo Wang et al.; IEEE; 2023; pp. 20534-20543 *
Research on Adversarial Example Attack and Defense Techniques in Deep Neural Networks; Zhang Shudong; Wanfang Data Knowledge Service Platform; 2023-05-04; chapters 2-4 *
Dual Adversarial Attacks Against License Plate Recognition Systems; Chen Xianyi et al.; Chinese Journal of Network and Information Security; 2023-06-30; Vol. 9, No. 3; pp. 16-27 *

Also Published As

Publication number Publication date
CN117523342A (en) 2024-02-06

Similar Documents

Publication Publication Date Title
Liang et al. Detecting adversarial image examples in deep neural networks with adaptive noise reduction
Silva et al. Opportunities and challenges in deep learning adversarial robustness: A survey
Hui et al. Linguistic structure guided context modeling for referring image segmentation
Kang et al. Shakeout: A new approach to regularized deep neural network training
CN110334742B (en) Graph confrontation sample generation method based on reinforcement learning and used for document classification and adding false nodes
CN111753881B (en) Concept sensitivity-based quantitative recognition defending method against attacks
CN110827330B (en) Time sequence integrated multispectral remote sensing image change detection method and system
CN111242166A (en) Universal countermeasure disturbance generation method
CN113283590A (en) Defense method for backdoor attack
CN113919497A (en) Attack and defense method based on feature manipulation for continuous learning ability system
CN115456043A (en) Classification model processing method, intent recognition method, device and computer equipment
CN113742723A (en) Detecting malware using deep generative models
WO2021083731A1 (en) System and method with a robust deep generative model
Avraham et al. Parallel optimal transport gan
Tan Visualizing global explanations of point cloud dnns
CN117523342B (en) High-mobility countermeasure sample generation method, equipment and medium
Naqvi et al. Adversarial attacks on visual objects using the fast gradient sign method
CN115719085B (en) Deep neural network model inversion attack defense method and device
JP2023118101A (en) Device and method for determining adversarial patch for machine learning system
Dai et al. A targeted universal attack on graph convolutional network
Ham et al. P-pseudolabel: enhanced pseudo-labeling framework with network pruning in semi-supervised learning
CN115758337A (en) Back door real-time monitoring method based on timing diagram convolutional network, electronic equipment and medium
Yu et al. Adversarial samples generation based on rmsprop
Wei et al. Learning and exploiting interclass visual correlations for medical image classification
Dhar et al. Detecting deepfake images using deep convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant