CN114741310A - Transferable image adversarial example generation and deep neural network testing method and system - Google Patents


Info

Publication number
CN114741310A
CN114741310A (application CN202210444185.XA)
Authority
CN
China
Prior art keywords
image
sample
disturbance
model
gradient
Prior art date
Legal status (assumption, not a legal conclusion)
Pending
Application number
CN202210444185.XA
Other languages
Chinese (zh)
Inventor
张鹏程 (Zhang Pengcheng)
任彬 (Ren Bin)
吉顺慧 (Ji Shunhui)
蔡涵博 (Cai Hanbo)
肖明轩 (Xiao Mingxuan)
Current Assignee
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date
Filing date
Publication date
Application filed by Hohai University HHU filed Critical Hohai University HHU
Priority to CN202210444185.XA priority Critical patent/CN114741310A/en
Publication of CN114741310A publication Critical patent/CN114741310A/en

Classifications

    • G06F11/3684 — Test management for test design, e.g. generating new test cases
    • G06F11/3688 — Test management for test execution, e.g. scheduling of test suites
    • G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N3/045 — Combinations of networks
    • G06N3/084 — Backpropagation, e.g. using gradient descent


Abstract

The invention provides a method and system for transferable image adversarial example generation and deep neural network testing, aiming to test the robustness of a deep neural network model in a real-world environment. First, data are collected and a surrogate model with the same function as the model under test is trained. Diversified neighbor samples of the input image sample are then generated. The gradients of the loss function with respect to the neighbor image samples are computed on the surrogate model, normalized, and averaged to give the perturbation direction. Finally, pixel values that violate the constraint conditions are clipped according to a dynamic constraint. These steps are repeated for the maximum number of iterations to obtain transferable image adversarial examples, which are then used to test the model under test. By diversifying the neighbor image samples, normalizing the gradients, and dynamically generating the constraints, the method improves both the success rate of transferable adversarial example generation and the quality of the generated images.

Description

Transferable image adversarial example generation and deep neural network testing method and system
Technical Field
The invention relates to a method and system for transferable image adversarial example generation and deep neural network model testing, and belongs to the field of artificial intelligence testing.
Background
Although deep neural networks (DNNs) have achieved good results in many computer tasks and practical applications, recent studies have shown that DNNs for image processing tasks are particularly vulnerable to image adversarial examples: inputs with small added perturbations that are imperceptible to humans but can cause serious errors in AI systems, posing a safety hazard to the practical deployment of DNNs.
At present, DNN security is mainly tested by generating image adversarial examples for the model under test and checking whether the deep neural network model still predicts correctly on the generated image samples. Although image adversarial examples are widespread, those generated by white-box attacks pose a less immediate threat, because a white-box attack requires sufficient internal information about the target model (the model under test), and in real application scenarios an attacker cannot obtain the structure, parameters, and other information of the target model. The other class is black-box attacks, which can be roughly divided into two types: query-based and transfer-based. Query-based attacks typically require many queries to the DNN to obtain sufficiently accurate gradient information, incurring significant computational expense and being easily detected by the system. Transfer-based attacks are different: a DNN model with the same task capability is used as a surrogate model, image adversarial examples are generated on the surrogate model, and these examples are used to attack the target model so that it produces wrong outputs. Transferable image adversarial examples generated by this kind of attack are cheap to produce, are no longer limited by target model information, and pose a greater threat in real deployment environments. Therefore, testing the security of the target DNN model against transferable image adversarial example attacks has strong practical significance.
At present, several methods for generating transferable image adversarial examples have been proposed, among which the following work well: (1) the momentum-based MI-FGSM method, which stabilizes the optimization direction by accumulating the previous gradient during iteration; (2) the input-transformation-based DMI, which randomly scales the input image sample to increase the diversity of the inputs; (3) the gradient-smoothing-based TMI, which applies Gaussian smoothing to the generated gradient with a Gaussian convolution kernel, enlarging the region of the input image that the model discriminates on; (4) the neighbor-gradient-variance-based VMI, which generates several neighbor samples of the input image and uses the difference between their gradients and the gradient of the current input as a correction for the next gradient, further stabilizing the optimization direction. Although the above methods are effective, they share the following problems:
(1) neural network testing methods based on transferable image adversarial examples still have weak error-finding capability, an unstable optimization direction being one of the main causes;
(2) current testing methods blindly pursue error-finding capability while ignoring the "imperceptibility" constraint on perturbations, resulting in poor quality of the generated image samples.
Summary of the Invention
Purpose of the invention: transferable image adversarial examples pose a significant security threat to the deployment of deep neural network models in real environments, and current generation methods encounter the problems described above during testing. The invention provides a transferable image adversarial example generation and deep neural network testing method and system that make two improvements: stabilizing the perturbation gradient direction to improve the error-finding capability of transferable image adversarial examples, and dynamically generating perturbation constraints to improve the quality of the generated image samples.
Technical scheme: to achieve the above object, the method for generating transferable image adversarial examples according to the invention comprises the following steps:
setting parameter information, including a maximum perturbation magnitude ε, a maximum number of iterations T, the number N of perturbation radius types for neighbor image samples, the number M of neighbor image samples on each radius, a radius coefficient β, a maximum scale coefficient inter, and a decay coefficient μ; generating a perturbation step length λ = ε/T, an amplification coefficient α = λ × β for the neighbor sample perturbations, and a set R of scale coefficients of the different neighbor perturbation radii relative to the amplification coefficient α; acquiring an input image sample, and generating a perturbation constraint range for the image sample according to the relation between each pixel and its surrounding pixels;
iteratively generating a transferable image adversarial example of the input image sample according to the following steps:
generating diversified neighbor image samples of the input image sample, computing the gradients of the surrogate model with respect to the neighbor image samples, normalizing the gradients, taking their average as the perturbation direction, and obtaining the perturbation of the current iteration from the perturbation direction and the perturbation step length; wherein the j-th neighbor image sample on the r-th radius of the i-th iteration is denoted x_{i,r,j} = x_i + R(r) × α × p_j, where x_i is the image sample input to the i-th iteration, R(r) is the r-th value of the scale coefficient set R, and p_j is a uniformly distributed random tensor of the same size as the image sample;
adding the generated perturbation to the input image sample, and fine-tuning the perturbed image sample according to the perturbation constraint so that it satisfies the constraint conditions, obtaining the intermediate image sample of the current iteration;
and judging whether the number of iterations has reached the maximum T; if so, taking the intermediate image sample obtained in the current iteration as the transferable image adversarial example; otherwise, taking the intermediate image sample as the input again and performing the next iteration.
Further, the perturbation constraint range of the image sample is generated as follows:
acquiring and sorting the neighborhood pixel values of each pixel of the input image sample;
taking the maximum and minimum values among the neighborhood pixels of each pixel of the input image sample as the upper and lower limits of that pixel's perturbation, i.e. the perturbation constraint of the pixels in the image.
Further, the method for obtaining the perturbation of the current iteration from the gradient direction and the perturbation step length comprises:
normalizing the gradients of the surrogate model with respect to the neighbor image samples on each radius, computing the average gradient of the M neighbor image samples per radius, and averaging over the N different radii to obtain the final gradient grad_i of the i-th iteration;
accumulating the gradient of the previous iteration and the gradient of the current iteration with a momentum-based gradient accumulation strategy to obtain the final gradient value n_i of the i-th iteration:
n_i = μ × n_{i−1} + grad_i
determining the perturbation direction of each pixel from the gradient value n_i and the sign function sign(), and multiplying by the corresponding perturbation step length to obtain the perturbation generated in this iteration.
Further, R = Linspace(0, inter, N+1), where Linspace denotes generating N+1 equally spaced numbers from 0 to inter.
The invention further provides a deep neural network testing method based on transferable image adversarial examples, comprising:
acquiring an image data set corresponding to the deep neural network to be tested and the corresponding label information;
reading the collected image data set and labels, performing data preprocessing, and putting the processed image data into a clean image sample set;
training, with the preprocessed training data, a surrogate model with the same function as the deep neural network model to be tested, and saving it;
obtaining input image samples from the clean image sample set, generating transferable image adversarial examples of the image samples using the above transferable image adversarial example generation method,
and saving them to the adversarial example data set as test cases;
and performing model testing on the target deep neural network model with the test cases in the transferable image adversarial example set.
Further, the testing method also comprises preprocessing the read image sample data and labels into the size and value range required by the model input, and, after applying the transferable image adversarial example generation method, restoring the data form of the original image samples for use as test cases.
Further, when the transferable image adversarial examples are used to test the model under test, the performance index of the model under test on the clean image sample set and its performance index on the transferable image adversarial examples are separately computed, and whether the difference between the two indexes is smaller than a specified threshold is used as the basis for judging whether the target model is secure and reliable.
Based on the same inventive concept, the invention provides a transferable image adversarial example generation system, comprising:
a parameter configuration module for setting parameter information, including a maximum perturbation magnitude ε, a maximum number of iterations T, the number N of perturbation radius types for neighbor image samples, the number M of neighbor image samples on each radius, a radius coefficient β, a maximum scale coefficient inter, and a decay coefficient μ; and generating a perturbation step length λ = ε/T, an amplification coefficient α = λ × β for the neighbor sample perturbations, and a set R of scale coefficients of the different neighbor perturbation radii relative to the amplification coefficient α;
a constraint generation module for acquiring an input image sample and generating its perturbation constraint range according to the relation between each pixel and its surrounding pixels;
and a sample generation module for iteratively generating a transferable image adversarial example of the input image sample according to the following steps: generating diversified neighbor image samples of the input image sample, computing the gradients of the surrogate model with respect to the neighbor image samples, normalizing the gradients, taking their average as the perturbation direction, and obtaining the perturbation of the current iteration from the perturbation direction and the perturbation step length, wherein the j-th neighbor image sample on the r-th radius of the i-th iteration is denoted x_{i,r,j} = x_i + R(r) × α × p_j, with x_i the image sample input to the i-th iteration, R(r) the r-th value of the scale coefficient set R, and p_j a uniformly distributed random tensor of the same size as the image sample; adding the generated perturbation to the input image sample and fine-tuning the perturbed image sample according to the perturbation constraint so that it satisfies the constraint conditions, obtaining the intermediate image sample of the current iteration; and judging whether the number of iterations has reached the maximum T, if so taking the intermediate image sample of the current iteration as the transferable image adversarial example, otherwise taking the intermediate image sample as input again and performing the next iteration.
Based on the same inventive concept, the invention provides a deep neural network testing system based on transferable image adversarial examples, comprising the modules of the transferable image adversarial example generation system together with the following modules:
a preprocessing module for acquiring the image data set corresponding to the deep neural network to be tested and the corresponding label information, reading the collected image data set and labels, performing data preprocessing, and putting the processed image data into a clean image sample set;
a surrogate model training module for training, with the preprocessed training data, a surrogate model with the same function as the deep neural network model to be tested, and saving it;
a test case generation module for obtaining input image samples from the clean image sample set, generating transferable image adversarial examples of the image samples through the transferable image adversarial example generation system, and saving them to the transferable adversarial example data set as test cases;
and a model testing module for performing model testing on the target deep neural network model with the test cases in the transferable image adversarial example set.
Based on the same inventive concept, the invention provides a computer system comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the computer program, when loaded into the processor, implements the transferable image adversarial example generation method or the deep neural network testing method based on transferable image adversarial examples.
Beneficial effects: to improve the error-finding capability of the testing method, the invention uses a neighbor image sample diversification operation and a gradient normalization operation to stabilize the optimization direction. The neighbor diversification operation searches for neighbor image samples at multiple perturbation radii to increase their diversity. The purpose of gradient normalization is to make the gradient produced by each neighbor image sample contribute equally to the optimization process. In addition, to improve the quality of the generated image samples, the invention determines the perturbation constraint range from the relation between each pixel and its surrounding pixels, so that perturbations are added only to edge regions of the image without damaging its smooth regions. Compared with the prior art, the method finds more errors in the model under test caused by transferable image adversarial examples, i.e. the success rate of transferable adversarial example generation is higher, while the quality of the generated images is guaranteed.
Drawings
FIG. 1 is a flowchart of transferable image adversarial example generation according to an embodiment of the invention.
FIG. 2 is a flowchart illustrating testing of a deep neural network according to an embodiment of the present invention.
Fig. 3 is a detailed flowchart of a method according to an embodiment of the present invention.
Detailed Description
The present invention is further illustrated by the following examples, which are purely exemplary and do not limit the scope of the invention; the various equivalent modifications that will occur to those skilled in the art upon reading the present disclosure fall within the scope of the appended claims.
As shown in fig. 1, the method for generating transferable image adversarial examples disclosed in the embodiment of the invention mainly comprises: first, setting the relevant parameter information, including the maximum perturbation magnitude ε, the maximum number of iterations T, the number N of perturbation radius types for neighbor image samples, the number M of neighbor image samples per radius, the radius coefficient β, the maximum scale coefficient inter, and the decay coefficient μ, and generating the perturbation step length λ = ε/T, the amplification coefficient α = λ × β for the neighbor sample perturbations, and the set R of scale coefficients of the different neighbor perturbation radii relative to α; then, from the input image sample, generating its perturbation constraint range according to the relation between each pixel and its surrounding pixels; then iteratively generating the transferable image adversarial example of the input image sample: in each iteration, generating diversified neighbor image samples of the input image sample, computing the gradients of the surrogate model with respect to the neighbor samples, normalizing the gradients, taking their average as the perturbation direction, and obtaining the perturbation of the current iteration from the perturbation direction and the perturbation step length; adding the generated perturbation to the input image sample and fine-tuning the perturbed sample according to the perturbation constraint so that it satisfies the constraint conditions, obtaining the intermediate image sample of the current iteration; and judging whether the number of iterations has reached the maximum T, if so taking the intermediate image sample of the current iteration as the transferable image adversarial example, otherwise taking it as input again and performing the next iteration.
As shown in fig. 2, the method for generating deep neural network test cases based on transferable image adversarial examples disclosed in the embodiment of the invention mainly comprises: first, acquiring the image data set corresponding to the deep neural network to be tested and the corresponding label information;
reading the collected image data set and labels, performing data preprocessing, and putting the processed image data into a clean image sample set; then training, with the preprocessed training data, a surrogate model with the same function as the deep neural network model to be tested and saving it; then obtaining input image samples from the clean image sample set, generating transferable image adversarial examples of the image samples with the above generation method,
and saving the transferable image adversarial examples to the adversarial example data set as test cases; and finally, performing model testing on the target deep neural network model with the test cases in the transferable image adversarial example set.
The following describes, with reference to fig. 3, the detailed steps of the transferable image adversarial example generation method and the deep neural network testing method disclosed in the embodiments of the invention, taking a deep neural network for image classification as an example and assuming that the target image classification model to be tested is a ResNet152 trained on ImageNet. The specific steps are as follows:
step 1: acquiring an ImageNet image classification data set, which mainly comprises two aspects:
step 11: downloading training sets, verification sets, test sets and their corresponding labels using an ImageNet data set download website (https:// image-net. org/gallens/LSVRC/2012/index. php);
step 12: and decompressing and storing required data from the corresponding compressed file locally.
Step 2: read the collected image data set and the corresponding labels, preprocess the read data, and put the test data into a clean image sample set. The specific process is:
Step 21: read the ImageNet training set, test set, and corresponding data labels into matrix tensors using PyTorch's image-reading functions;
Step 22: crop the read image samples so that their height and width match the input of the target neural network model, both being 299; map the pixel values of the image tensors from [0, 255] into the range [−1.0, 1.0].
Step 23: put the original test set into the clean sample set; the training set is used to train the deep neural network model for image classification.
Step 3: use the preprocessed ImageNet training set to train the deep neural network model Inception-V3 for image classification as the surrogate model.
Step 4: set the parameter configuration information of the transferable image adversarial example generation method as follows:
Step 41: set the maximum perturbation magnitude ε = 16.0, the maximum number of iterations T = 10, the number of neighbor perturbation radius types N = 6, the number of neighbor image samples per radius M = 10, the radius coefficient β = 1.8, the maximum scale coefficient inter = 3.0, and the decay coefficient μ = 0.5.
Step 42: generate the perturbation step length λ = ε/T; generate the amplification coefficient α = λ × β for the neighbor sample perturbations; and generate the set R of scale coefficients of the different neighbor perturbation radii relative to α, where Linspace denotes generating N+1 equally spaced numbers from 0 to inter and storing them in the array R. R(0) corresponds to a perturbation radius of 0 and is discarded, so the N scale values with subscripts 1 to N are used.
R=Linspace(0,inter,N+1)
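The parameter generation of steps 41–42 can be reproduced numerically; a short NumPy sketch using the values given in the embodiment:

```python
import numpy as np

# Parameter values taken from step 41 of the embodiment.
eps, T, N, beta, inter = 16.0, 10, 6, 1.8, 3.0
lam = eps / T                        # perturbation step length lambda
alpha = lam * beta                   # neighbor-perturbation amplification alpha
R = np.linspace(0.0, inter, N + 1)   # N+1 equally spaced scale coefficients
radii = R[1:]                        # R(0) is the zero radius and is discarded
```

With these values, λ = 1.6, α = 2.88, and R = [0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0].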
Step 5: generate the dynamic perturbation constraint. The specific steps are:
Step 51: obtain and sort the 8-neighborhood pixel values of each pixel in the input image tensor x_0;
Step 52: take the maximum and minimum values among the 8 neighborhood pixels of each pixel as the upper and lower limits of that pixel's perturbation constraint, i.e. the perturbation constraint of each pixel in the image tensor x_0, and store the per-pixel constraints in the tensor E.
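A minimal NumPy sketch of this dynamic constraint, assuming a single-channel image, edge padding at the border, and reading the bounds as the min/max of the 8-neighborhood (the function name is ours, not the patent's):

```python
import numpy as np

def perturbation_bounds(img):
    """Per-pixel lower/upper perturbation bounds from the 3x3 neighborhood.

    Lower bound = min of the 8 neighbors, upper bound = max of the 8
    neighbors; in a flat region the bounds collapse onto the pixel value,
    so smooth areas receive no perturbation (step 52).
    """
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # Stack the 8 shifted copies of the image (the neighborhood minus the center).
    shifts = [padded[dy:dy + h, dx:dx + w]
              for dy in range(3) for dx in range(3)
              if not (dy == 1 and dx == 1)]
    stack = np.stack(shifts)
    return stack.min(axis=0), stack.max(axis=0)

img = np.array([[0., 1., 2.],
                [3., 4., 5.],
                [6., 7., 8.]])
lo, hi = perturbation_bounds(img)
```

For the center pixel the neighbors are 0–8 excluding 4, so its bounds are [0, 8]; a constant image would give zero-width bounds everywhere, matching the "perturb edges, not smooth areas" goal.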
Step 6: from the input image sample and the surrogate model, obtain the gradients of the surrogate model with respect to diversified neighbor image samples of the input, normalize the gradients, take their average as the perturbation direction, and obtain the image sample perturbation of the current iteration from the perturbation direction and the perturbation step length. Specifically:
Step 61: obtain the current input image sample x_i, where i denotes the current iteration round. The j-th of the M neighbor image samples of x_i on the r-th radius is x_{i,r,j}, where 0 ≤ j < M and j is an integer. p_j ~ U(−2.0, 2.0), and p_j has the same shape as the input tensor; it is used to generate perturbations whose mean perturbation radius is 1.0.
x_{i,r,j} = x_i + R(r) × α × p_j
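A hedged sketch of the neighbor generation in step 61 (the helper name and the fixed random seed are ours, not the patent's):

```python
import numpy as np

rng = np.random.default_rng(0)

def neighbor_samples(x, R, alpha, M):
    """Generate M uniform-noise neighbors of x at each nonzero radius.

    p_j ~ U(-2, 2) as in step 61; radius r uses scale coefficient R[r].
    Returns a dict {r: list of M neighbor samples}.
    """
    return {r: [x + R[r] * alpha * rng.uniform(-2.0, 2.0, size=x.shape)
                for _ in range(M)]
            for r in range(1, len(R))}

x = np.zeros((4, 4))
R = np.linspace(0.0, 3.0, 7)
nbrs = neighbor_samples(x, R, alpha=2.88, M=10)
```

At radius r = 1 the perturbation of each pixel is bounded by R[1] × α × 2 = 2.88.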
Input the M neighbor image samples into the surrogate model trained in step 3, and obtain by backpropagation the gradient of the Inception-V3 loss function with respect to each neighbor image sample,
∇_{x_{i,r,j}} J(x_{i,r,j}, y; θ).
The loss function here is the cross-entropy loss J(x_{i,r,j}, y; θ) = −1_y · log(softmax(l(x_{i,r,j}; θ))), where l(x_{i,r,j}; θ) denotes the logits produced by feeding x_{i,r,j} into the Inception-V3 model, θ denotes the model parameters of Inception-V3, y denotes the label of the input image sample, and 1_y denotes the one-hot encoding of the sample label y. To stabilize the perturbation direction, we normalize the gradient of the Inception-V3 loss with respect to each neighbor image sample, yielding:
g_{i,r,j} = ∇_{x_{i,r,j}} J(x_{i,r,j}, y; θ) / ‖∇_{x_{i,r,j}} J(x_{i,r,j}, y; θ)‖₂
where ‖·‖₂ denotes the L2 norm.
So the average of the gradients of the M neighbor image samples on the r-th radius is:
grad_{i,r} = (1/M) Σ_{j=0}^{M−1} g_{i,r,j}
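The per-neighbor normalization and per-radius averaging can be illustrated with dummy gradients (NumPy; the helper name is illustrative):

```python
import numpy as np

def radius_gradient(neighbor_grads):
    """Average the L2-normalized gradients of the M neighbors on one radius.

    Normalization gives every neighbor the same contribution to the
    optimization direction, regardless of its raw gradient magnitude.
    """
    normed = [g / np.linalg.norm(g) for g in neighbor_grads]
    return sum(normed) / len(normed)

# Two dummy neighbor gradients with very different magnitudes.
grads = [np.array([3.0, 4.0]), np.array([0.0, 2.0])]
g_r = radius_gradient(grads)
```

Here [3, 4] normalizes to [0.6, 0.8] and [0, 2] to [0, 1], so the average is [0.3, 0.9]: the larger raw gradient no longer dominates.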
Step 62: average the gradients of the N different radii to obtain the final gradient. For the gradient of each radius to contribute equally to the optimization result, each grad_{i,r} is normalized, so the gradient generated on the input image sample in the i-th iteration is:
grad_i = (1/N) Σ_{r=1}^{N} grad_{i,r} / ‖grad_{i,r}‖₂
Step 63: accumulate the gradient of the previous iteration and the gradient of the current iteration with a momentum-based gradient accumulation strategy to obtain the final gradient value n_i of the i-th iteration, escaping local optima:
n_i = μ × n_{i−1} + grad_i
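A one-line sketch of the momentum accumulation, with μ = 0.5 as in step 41 (grad is assumed already normalized per step 62):

```python
import numpy as np

def momentum_update(n_prev, grad, mu=0.5):
    """n_i = mu * n_{i-1} + grad_i  (momentum accumulation, step 63)."""
    return mu * n_prev + grad

# Two iterations with orthogonal gradients: the old direction decays but persists.
n = np.zeros(2)
for g in [np.array([1.0, 0.0]), np.array([0.0, 1.0])]:
    n = momentum_update(n, g)
```

After the second iteration n = [0.5, 1.0]: half of the first direction survives, stabilizing the optimization direction across iterations.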
and 7: obtaining pixel perturbations and adding to the input image sample xiIn (1). According to the gradient niAnd the sign function sign () can determine the disturbance direction of each pixel point, multiply the corresponding step length to be used as the disturbance generated by the iteration of the current round, and finally carry out fine tuning on the Clip of the image sample added with the disturbance by using the disturbance constraint E generated in the step 5E,ClipEThe finger truncates image pixel values that exceed the perturbation constraint range so that each pixel value satisfies constraint E. So the image sample generated in the current iteration is the input image sample x of the next iterationi+1Comprises the following steps:
x_{i+1} = Clip_E(x_i + λ · sign(n_i))
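The per-iteration update of step 7 can be sketched as a single NumPy expression. `perturb_step` is an illustrative name; `lower` and `upper` stand in for the per-pixel constraint range E from step 5:

```python
import numpy as np

def perturb_step(x_i, n_i, lam, lower, upper):
    # x_{i+1} = Clip_E(x_i + lam * sign(n_i)): step lam in the sign direction,
    # then truncate every pixel to its own constraint range E = [lower, upper]
    return np.clip(x_i + lam * np.sign(n_i), lower, upper)
```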
Step 8: judging whether the maximum iteration number T has been reached; if so, performing step 9, otherwise repeating steps 6-8;
Step 9: restoring the generated image sample to the data form of the original image sample, namely mapping from [-1.0, 1.0] back to pixel values in [0, 255], and taking the generated transferable image countermeasure sample as a test case for the target model;
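The mapping from the normalized range back to pixel values in step 9 can be sketched as follows; the function name and the rounding choice are illustrative assumptions:

```python
import numpy as np

def restore_pixels(x):
    # map the normalized sample from [-1.0, 1.0] back to uint8 pixels in [0, 255]
    return np.clip(np.rint((x + 1.0) * 127.5), 0, 255).astype(np.uint8)
```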
Step 10: ResNet152 is model-tested using the transferable image countermeasure samples. The classification accuracy ACRC_clean of the ResNet152 model on the clean image sample set and the classification accuracy ACRC_adv on the transferable adversarial image samples are counted respectively. If the difference between ACRC_clean and ACRC_adv is not less than the required safety threshold Th = 0.5, the safety of the model is poor; otherwise the model meets the safety requirement, passes the test and can be deployed.
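The safety judgment of step 10 reduces to a single comparison. A minimal sketch, with an illustrative function name:

```python
def passes_safety_test(acrc_clean, acrc_adv, th=0.5):
    # the model is judged safe when the accuracy drop from the clean sample
    # set to the transferable adversarial samples stays below the threshold Th
    return (acrc_clean - acrc_adv) < th
```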
Based on the same inventive concept, the embodiment of the invention provides a transferable image countermeasure sample generation system, which comprises: a parameter configuration module for setting parameter information, the parameter information comprising a maximum disturbance change value ε, a maximum iteration number T, the number N of disturbance radius types of neighboring image samples, the number M of neighboring image samples on each radius, a radius coefficient β, a proportionality coefficient maximum value inter and an attenuation coefficient μ, and for generating a disturbance step length λ = ε/T, an amplification coefficient α = λ × β of the disturbance of the neighboring image samples, and a group R of different proportional coefficients of the neighboring disturbance radii relative to the amplification coefficient α; a constraint generation module for acquiring an input image sample and generating a disturbance constraint range of the image sample according to the relation between each pixel and its surrounding pixels; and a sample generation module for iteratively generating a transferable image countermeasure sample of the input image sample according to the following steps: generating diversified neighboring image samples of the input image sample, calculating the gradient of the proxy model relative to the neighboring image samples, normalizing the gradients, taking their average as the disturbance direction, and obtaining the disturbance of the current iteration according to the disturbance direction and the disturbance step length, wherein the j-th neighboring image sample x_{i,r,j} on the r-th radius of the i-th iteration round is denoted by x_{i,r,j} = x_i + R(r) × α × p_j; adding the generated disturbance to the input image sample, and fine-tuning the disturbed image sample according to the disturbance constraint so that it meets the constraint conditions, thereby obtaining the intermediate image sample of the current iteration; and judging whether the iteration number reaches the maximum iteration number T, and if so, taking the intermediate image sample obtained by the current iteration as the transferable image countermeasure sample, otherwise taking the intermediate image sample as input again and performing the next iteration.
Based on the same inventive concept, the deep neural network testing system based on transferable image countermeasure samples provided by the embodiment of the invention comprises the modules of the above transferable image countermeasure sample generation system, and the following modules: a preprocessing module for acquiring the image data set corresponding to the deep neural network to be tested and the corresponding label information, reading the collected image data set and labels, performing data preprocessing, and putting the processed image data into a clean image sample set; a proxy model training module for training and storing a proxy model with the same function as the deep neural network model to be tested using the preprocessed training data; a test case generation module for acquiring an input image sample from the clean image sample set, generating a transferable image countermeasure sample of the image sample through the transferable image countermeasure sample generation system, and saving it into the transferable image countermeasure sample data set as a test case; and a model testing module for performing a model test on the target deep neural network model using the test cases in the transferable image countermeasure sample set.
Based on the same inventive concept, the embodiment of the present invention provides a computer system, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the computer program, when loaded into the processor, implements the above transferable image countermeasure sample generation method or the above deep neural network testing method based on transferable image countermeasure samples.

Claims (10)

1. A transferable image confrontation sample generation method is characterized by comprising the following steps:
setting parameter information including a maximum disturbance change value ε, a maximum iteration number T, the number N of disturbance radius types of neighboring image samples, the number M of neighboring image samples on each radius, a radius coefficient β, a proportionality coefficient maximum value inter and an attenuation coefficient μ; generating a disturbance step length λ = ε/T, an amplification coefficient α = λ × β of the disturbance of the neighboring image samples, and a group R of different proportional coefficients of the neighboring disturbance radii relative to the amplification coefficient α; acquiring an input image sample, and generating a disturbance constraint range of the image sample according to the relation between each pixel and its surrounding pixels;
iteratively generating a migratable image countermeasure sample of the input image samples according to the following steps:
generating diversified neighboring image samples of the input image sample, calculating the gradient of the proxy model relative to the neighboring image samples, normalizing the gradients, taking their average as the disturbance direction, and obtaining the disturbance of the current iteration according to the disturbance direction and the disturbance step length; wherein the j-th neighboring image sample x_{i,r,j} on the r-th radius of the i-th iteration round is denoted by x_{i,r,j} = x_i + R(r) × α × p_j, x_i being the image sample input for the i-th iteration, R(r) representing the r-th value of the scale factor group R, and p_j being a random tensor obeying a uniform distribution with the same size as the image sample;
adding the generated disturbance into the input image sample, and finely adjusting the image sample added with the disturbance according to disturbance constraint to enable the image sample to meet constraint conditions, so as to obtain an intermediate image sample of current iteration;
and judging whether the iteration number reaches the maximum iteration number T, if so, taking the intermediate image sample obtained by the current iteration as a transferable image countermeasure sample, otherwise, taking the intermediate image sample as input again, and performing the next iteration.
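The neighbor-sample construction x_{i,r,j} = x_i + R(r) × α × p_j in claim 1 can be sketched as below. The function name is illustrative, and the uniform range [-1, 1) is an assumption, since the claim only states that p_j obeys a uniform distribution:

```python
import numpy as np

def neighbor_samples(x_i, R, alpha, M, seed=0):
    # x_{i,r,j} = x_i + R[r] * alpha * p_j; p_j is a random tensor with the
    # same shape as x_i, assumed here to be uniform on [-1, 1)
    rng = np.random.default_rng(seed)
    return [[x_i + r * alpha * rng.uniform(-1.0, 1.0, x_i.shape)
             for _ in range(M)] for r in R]
```

With R(1) = 0 the first radius reproduces x_i unchanged, and larger scale factors spread the neighbors further from the input sample.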
2. The method of claim 1, wherein the disturbance constraint range of the image sample is generated according to the following method:
acquiring neighborhood pixel values of each pixel point of an input image sample and sequencing;
and taking the maximum value and the minimum value in the neighborhood pixels of each pixel of the input image sample as the upper and lower limit values of the current pixel disturbance, namely the disturbance constraint of the pixels in the image.
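The neighborhood-based constraint of claim 2 can be sketched as follows. The function name and the 3×3 window size (k = 1) are illustrative assumptions, since the claim does not fix a neighborhood size:

```python
import numpy as np

def pixel_constraints(img, k=1):
    # per-pixel perturbation bounds: the max / min over each pixel's
    # (2k+1) x (2k+1) neighborhood, truncated at the image border
    h, w = img.shape
    lower = np.empty_like(img)
    upper = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = img[max(0, i - k):i + k + 1, max(0, j - k):j + k + 1]
            lower[i, j] = patch.min()
            upper[i, j] = patch.max()
    return lower, upper
```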
3. The method for generating the migratable image countermeasure sample according to claim 1, wherein the method for obtaining the perturbation of the current iteration according to the gradient direction and the perturbation step size comprises:
normalizing, for each radius, the gradient of the proxy model relative to each neighboring image sample, calculating the gradient average of the M neighboring image samples, and averaging the N gradients of different radii in the i-th iteration round to obtain the final gradient grad_i;
accumulating the gradient of the previous iteration and the gradient of the current iteration using a momentum-based gradient accumulation strategy as the final gradient value n_i of the i-th iteration:
n_i = μ · n_{i-1} + grad_i
and determining the perturbation direction of each pixel point according to the gradient value n_i and the sign function sign(), and multiplying it by the corresponding perturbation step length as the perturbation generated in the current iteration round.
4. The transferable image confrontation sample generation method according to claim 1, wherein
R = Linspace(0, inter, N+1)
and Linspace represents generating N+1 numbers equidistantly from 0 to inter.
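The Linspace operation of claim 4 corresponds directly to NumPy's `linspace`; a minimal sketch with illustrative parameter values:

```python
import numpy as np

inter, N = 2.0, 4
R = np.linspace(0.0, inter, N + 1)  # N+1 equidistant scale factors from 0 to inter
# R -> array([0. , 0.5, 1. , 1.5, 2. ])
```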
5. A deep neural network testing method based on a transferable image confrontation sample is characterized by comprising the following steps:
acquiring an image data set corresponding to a deep neural network to be tested and corresponding label information;
reading the collected image data set and the label, performing data preprocessing, and putting the processed image data into a clean image sample set;
training an agent model with the same function as the deep neural network model to be tested by using the preprocessed training data and storing the agent model;
obtaining an input image sample from the clean image sample set, generating a transferable image countermeasure sample of the image sample using the transferable image confrontation sample generation method of claim 1, and saving the generated transferable image countermeasure sample into a transferable image countermeasure sample data set as a test case;
and performing a model test on the target deep neural network model using the test cases in the transferable image countermeasure sample set.
6. The deep neural network testing method based on transferable image countermeasure samples according to claim 5, further comprising preprocessing the read image sample data and labels into the size and numerical range required by the model input; and, after the transferable image confrontation sample generation method according to claim 1 is applied, restoring the data form of the original image sample for use as a test case.
7. The method as claimed in claim 5, wherein when the transferable image confrontation samples are used to perform a model test on the target model, the model performance indexes of the model to be tested on the clean image sample set and on the transferable image confrontation samples are counted respectively, and whether the difference between the two accuracy rates is smaller than a specified threshold value is used as the index for judging whether the model to be tested is safe and reliable.
8. A migratable image confrontation sample generation system, comprising:
a parameter configuration module for setting parameter information, the parameter information comprising a maximum disturbance change value ε, a maximum iteration number T, the number N of disturbance radius types of neighboring image samples, the number M of neighboring image samples on each radius, a radius coefficient β, a proportionality coefficient maximum value inter and an attenuation coefficient μ, and for generating a disturbance step length λ = ε/T, an amplification coefficient α = λ × β of the disturbance of the neighboring image samples, and a group R of different proportional coefficients of the neighboring disturbance radii relative to the amplification coefficient α;
the constraint generation module is used for acquiring an input image sample and generating a disturbance constraint range of the image sample according to the relation between each pixel and surrounding pixels;
and a sample generation module for iteratively generating a transferable image countermeasure sample of the input image sample according to the following steps: generating diversified neighboring image samples of the input image sample, calculating the gradient of the proxy model relative to the neighboring image samples, normalizing the gradients, taking their average as the disturbance direction, and obtaining the disturbance of the current iteration according to the disturbance direction and the disturbance step length; wherein the j-th neighboring image sample x_{i,r,j} on the r-th radius of the i-th iteration round is denoted by x_{i,r,j} = x_i + R(r) × α × p_j, x_i being the image sample input for the i-th iteration, R(r) representing the r-th value of the scale factor group R, and p_j being a random tensor obeying a uniform distribution with the same size as the image sample; adding the generated disturbance to the input image sample, and fine-tuning the disturbed image sample according to the disturbance constraint so that it meets the constraint conditions, thereby obtaining the intermediate image sample of the current iteration; and judging whether the iteration number reaches the maximum iteration number T, and if so, taking the intermediate image sample obtained by the current iteration as the transferable image countermeasure sample, otherwise taking the intermediate image sample as input again and performing the next iteration.
9. A system for testing a deep neural network based on a migratable image resistance sample, comprising the modules of a migratable image resistance sample generating system as claimed in claim 8, and:
the preprocessing module is used for acquiring an image data set corresponding to the deep neural network to be tested and corresponding label information; reading the collected image data set and the label, performing data preprocessing, and putting the processed image data into a clean image sample set;
the agent model training module is used for training and storing an agent model with the same function as the deep neural network model to be tested by using the preprocessed training data;
the test case generation module is used for acquiring an input image sample from a clean image sample set, generating a transferable image resisting sample of the image sample through the transferable image resisting sample generation system, and storing the transferable image resisting sample into the transferable image resisting sample data set as a test case;
and a model testing module for performing a model test on the target deep neural network model using the test cases in the transferable image countermeasure sample set.
10. A computer system comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the computer program when loaded into the processor implements a method for generating a migratable image countermeasure sample according to any one of claims 1 to 4 or a method for deep neural network testing based on migratable image countermeasure samples according to any one of claims 5 to 7.
CN202210444185.XA 2022-04-26 2022-04-26 Transferable image confrontation sample generation and deep neural network testing method and system Pending CN114741310A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210444185.XA CN114741310A (en) 2022-04-26 2022-04-26 Transferable image confrontation sample generation and deep neural network testing method and system

Publications (1)

Publication Number Publication Date
CN114741310A true CN114741310A (en) 2022-07-12

Family

ID=82283940

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210444185.XA Pending CN114741310A (en) 2022-04-26 2022-04-26 Transferable image confrontation sample generation and deep neural network testing method and system

Country Status (1)

Country Link
CN (1) CN114741310A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115392456A (en) * 2022-08-30 2022-11-25 北京交通大学 High-mobility countermeasure sample generation method for asymptotic normality of fusion optimization algorithm
CN115392456B (en) * 2022-08-30 2023-10-10 北京交通大学 Fusion optimization algorithm asymptotically normal high migration countermeasure sample generation method
CN116303088A (en) * 2023-04-17 2023-06-23 南京航空航天大学 Test case ordering method based on deep neural network cross entropy loss

Similar Documents

Publication Publication Date Title
CN114741310A (en) Transferable image confrontation sample generation and deep neural network testing method and system
Cao et al. Heteroskedastic and imbalanced deep learning with adaptive regularization
CN109597043B (en) Radar signal identification method based on quantum particle swarm convolutional neural network
Jin et al. On Evolutionary Optimization with Approximate Fitness Functions.
CN112215292B (en) Image countermeasure sample generation device and method based on mobility
CN109635763B (en) Crowd density estimation method
CN111126134A (en) Radar radiation source deep learning identification method based on non-fingerprint signal eliminator
CN111259397B (en) Malware classification method based on Markov graph and deep learning
CN111178504B (en) Information processing method and system of robust compression model based on deep neural network
CN111062036A (en) Malicious software identification model construction method, malicious software identification medium and malicious software identification equipment
CN112926661A (en) Method for enhancing image classification robustness
CN116912568A (en) Noise-containing label image recognition method based on self-adaptive class equalization
CN112217650A (en) Network blocking attack effect evaluation method, device and storage medium
CN115496144A (en) Power distribution network operation scene determining method and device, computer equipment and storage medium
CN117131348B (en) Data quality analysis method and system based on differential convolution characteristics
CN111950635A (en) Robust feature learning method based on hierarchical feature alignment
CN111797979A (en) Vibration transmission system based on LSTM model
CN116305103A (en) Neural network model backdoor detection method based on confidence coefficient difference
Shimoji et al. Data clustering with entropical scheduling
CN115620100A (en) Active learning-based neural network black box attack method
CN115019102A (en) Construction method and application of confrontation sample generation model
Krithivasan et al. Efficiency attacks on spiking neural networks
CN112417377A (en) Military reconnaissance system efficiency evaluation method
CN111382800B (en) Multi-label multi-classification method suitable for sample distribution imbalance
CN113052314B (en) Authentication radius guide attack method, optimization training method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination