CN114139631B - Gray-box adversarial example generation method with selectable multi-target training objects

Gray-box adversarial example generation method with selectable multi-target training objects

Info

Publication number
CN114139631B
Authority
CN
China
Prior art keywords
modification
point
area
picture
points
Prior art date
Legal status
Active
Application number
CN202111465982.8A
Other languages
Chinese (zh)
Other versions
CN114139631A (en)
Inventor
关志涛
陈子民
王俪蓉
Current Assignee
North China Electric Power University
Original Assignee
North China Electric Power University
Priority date
Filing date
Publication date
Application filed by North China Electric Power University
Priority to CN202111465982.8A
Publication of CN114139631A
Application granted
Publication of CN114139631B
Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Abstract

The invention discloses a gray-box adversarial example generation method with selectable multi-target training objects, comprising the following steps. Step 1: preprocessing: set the modification object, the number of differential evolution iterations, and the population size. Step 2: image segmentation: the main purpose of this step is to divide the picture to be modified into many small regions of similar color according to the color similarity of adjacent pixels; if the colors of pixels in adjacent regions are similar, those pixels are taken to describe the same thing. Step 3: search for the best modification region: searching the picture directly gives a relatively large search range, so the main purpose of this step is to find the optimal modification region and thereby bound the range of the finer search. Step 4: precisely search for modification points by differential evolution. Step 5: eliminate unnecessary candidate points. Step 6: adversarial training: merge the adversarial examples into the training set and perform adversarial training to improve the detection confidence of the selected targets.

Description

Gray-box adversarial example generation method with selectable multi-target training objects
Technical Field
The invention relates to a gray-box adversarial example generation method with selectable multi-target training objects, and belongs to the technical field of adversarial example generation.
Background
Object detection is one of the most important tasks in the field of computer vision. Object detection methods generally fall into two categories. The first computes candidate boxes with an algorithm and then classifies the contents of each candidate box, e.g. R-CNN and Fast R-CNN. The second performs localization and classification within a single deep convolutional neural network, e.g. YOLO.
Adversarial examples are an important research object in artificial-intelligence security: specific fine modifications of an input image can reduce the confidence of the model's classification or even make classification fail.
A gray box is a model whose internal parameters and structure are unknown and of which only the output state is observable.
Deep convolutional neural networks are widely applied in the field of computer vision, but adversarial examples generated by specific modifications of an image can make such a network classify incorrectly and detect targets inaccurately. If adversarial examples are added to the training samples and the model is adversarially trained after training on the original dataset, the model's resistance to interference improves, and so does its accuracy when used in harsh environments. Mainstream adversarial example generation methods are basically white-box methods, i.e. they assume the internal structure of the model is known, such as FGSM, BIM, and ILCM, which obtain the modification direction of the adversarial example through backpropagation. Alternatively, JSMA builds a gradient-based saliency map and generates adversarial examples from its key points. In daily use, however, we are likely to know only the confidence after image classification, and perhaps only the confidence of the correct class, returned by the deep convolutional neural network. That is, the network is a gray-box model: its structure and internal parameters are unknown, so the direction for generating adversarial examples cannot be determined by a backpropagation algorithm. Moreover, backpropagation-based generation algorithms such as FGSM, BIM, and ILCM tend to modify the original image at large scale, altering almost every pixel in the picture. With an ill-chosen perturbation step, such adversarial examples are easily recognized by the human eye; an excessively large step even produces garbage samples that interfere with adversarial training.
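For reference, FGSM, the archetype of these white-box methods, perturbs the input along the sign of the loss gradient, which is exactly the information a gray box withholds:

    x_adv = x + ε * sign(∇_x J(θ, x, y))

where J is the training loss, θ the model parameters, x the input picture, y its true label, and ε the perturbation step whose size governs how visible the perturbation is.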
Disclosure of Invention
The invention provides a gray-box adversarial example generation method with selectable multi-target training objects, addressing the situation where an existing object detection model is a gray box and the user hopes to improve the detection accuracy of certain important classes, so that the model detects those classes accurately even in harsh environments, while not causing a large loss in the detection of non-key targets.
To solve the above technical problems, the invention adopts the following technical scheme:
A gray-box adversarial example generation method with selectable multi-target training objects comprises the following steps:
step 1: preprocessing: setting the modification object, the number of differential evolution iterations, and the population size;
step 2: image segmentation: the main purpose of this step is to divide the picture to be modified into many small regions of similar color according to the color similarity of adjacent pixels; if the colors of pixels in adjacent regions are similar, those pixels describe the same thing;
step 3: searching for the best modification region with a differential evolution algorithm: searching the picture itself directly gives a relatively large search range, so the main purpose of this step is to find the optimal modification region and thereby bound the range of the finer search;
step 4: differential evolution precisely searches for and records the positions and perturbation modes of modification points:
step 4.1: using a differential evolution algorithm, searching the optimal modification region for the coordinates and perturbation mode of the current optimal modification point;
step 4.2: recording the coordinates of the optimal modification point, its R, G, B channel modification mode, and the drop in the target confidence sum; updating the upper bound of the target confidence sum; and taking the current optimal modification point as a candidate modification point;
step 4.3: applying the effect of the optimal modification point to the original picture, updating it, and taking the modified picture as the original picture for the next perturbation;
step 4.4: repeating from step 3 until the modification effect reaches the ideal state, i.e. the target confidence sum is 0 and none of the designated targets is correctly detected or classified;
step 5: eliminating unnecessary candidate points:
step 5.1: sorting the candidate modification points from smallest to largest drop in the perturbed targets' confidence sum;
step 5.2: restoring the picture at each candidate modification point in turn; if the perturbation still works, eliminating that candidate point; the result is the adversarial example;
step 6: adversarial training:
step 6.1: merging the adversarial example into the training set for adversarial training.
In step 5, the picture is restored at the candidate modification points in ascending order of the confidence-sum drop, and a candidate point is eliminated if the perturbation effect is unaffected.
Adversarial training takes place after model training is complete: the adversarial examples generated according to the specific rules are merged into the model's training set and the model is adversarially trained, which strengthens its correctness in harsh environments.
In step 1, the trainer sets the target classes whose confidence is to be enhanced and, by feeding the original picture to the model, establishes the confidence sum of the classes currently to be modified as well as the confidences of the other classes.
In step 1, the trainer sets the modification object A, where A is a class the model can identify (A ∈ model), and uses the deep convolutional neural network model to establish the confidence sum of the class currently to be modified, conf(pic, A) = Σ_{i=1..N} conf_i(pic, A), where pic is the original picture used to generate the adversarial example and N is the total number of targets to be perturbed in the picture.
Step 2 comprises the following steps:
step 2.1: establishing the maximum change value max_dist that adjacent pixels within a region can bear;
step 2.2: processing the whole image with a Laplacian-of-Gaussian convolution to reduce noise in the picture while preserving the characteristics of the image edges;
step 2.3: segmenting the image using the idea of union-find, under the constraint that adjacent points bear at most the maximum change value;
step 2.4: forming a bounding box for each segmented region to represent the whole region.
Step 2.3 is as follows: define a father array giving each region's directly associated parent region number and initialize it with no repeated elements, so that each pixel in the image initially represents its own region. Search the father array with the recursive find_father operation to find the number of the region a given region ultimately belongs to. After each query, update the father array by path compression, writing the region's ultimate region number directly as the current region's parent region number.
The degree of change between the current point and its neighbors in the four directions is measured by the Euclidean distance of the two points' R, G, B values, where R_nowpoint, G_nowpoint, B_nowpoint are the R, G, B values of the current pixel and R_neighborpoint, G_neighborpoint, B_neighborpoint those of the neighboring pixel:
dist = sqrt((R_nowpoint - R_neighborpoint)^2 + (G_nowpoint - G_neighborpoint)^2 + (B_nowpoint - B_neighborpoint)^2)
If two adjacent points belong to different regions and dist > max_dist, the two points do not belong to the same region. If they belong to different regions but dist ≤ max_dist, the two points are merged into the same region, the father array is maintained, and the neighboring pixel's region number is corrected to the current pixel's region number.
Step 2.4 is: from each segmented region's minimum abscissa, maximum abscissa, minimum ordinate, and maximum ordinate, form a bounding box that approximately represents the whole region.
In step 3, a differential evolution algorithm searches the whole picture to obtain the optimal modification region.
Step 3 includes:
step 3.1: using a differential evolution algorithm, searching the whole picture for a candidate modification point of the modification region;
step 3.2: querying the region containing that modification point via the region numbers obtained from image segmentation, and taking that region as the precise search region.
Further, step 3 includes:
step 3.1: establishing the population structure, the number of iterations, and the population size:
in this scheme the population structure of the differential evolution is a single-pixel modification, a 5-dimensional vector: the modification of a point on the picture is defined as [x, y, R, G, B], representing the abscissa, the ordinate, and the corresponding R, G, B values of the pixel at each modification point;
the problem is converted into an optimization problem, i.e. a suitable modification r is sought in the solution space of all possible modifications such that the confidence sum of the class currently to be modified is minimal:
min conf(pic, A, r)
s.t. 0 ≤ x ≤ N, 0 ≤ y ≤ M, 0 ≤ (R, G, B) ≤ 255
where A is the modification object, r is the modification, N is the length of the picture, M is the width of the picture, and R, G, B are the three channels of the pixel;
step 3.2: initializing the population:
{X_i(0) | 0 ≤ X_{i,1}(0) ≤ N, 0 ≤ X_{i,2}(0) ≤ M, 0 ≤ X_{i,3}(0) ≤ 255, 0 ≤ X_{i,4}(0) ≤ 255, 0 ≤ X_{i,5}(0) ≤ 255, i = 1, 2, ..., np}
where X_i(0) is the i-th "chromosome" of generation 0 of the population, a 5-dimensional vector [x, y, R, G, B], and np is the population size, i.e. the number of individuals performing the differential evolution search;
for the i-th chromosome, the j-th gene is drawn as
X_{i,j}(0) = X_j^min + rand(0, 1) * (X_j^max - X_j^min)
where X_{i,j}(0) is the j-th gene of the i-th chromosome of generation 0, X_j^min is the lower bound on X_{i,j}, the minimum of the j-th column of X, and X_j^max is the upper bound on X_{i,j}, the maximum of the j-th column of X;
step 3.3: mutation: randomly selecting 3 distinct individuals p1, p2, p3 from the population and generating the mutation vector:
V_i(g+1) = X_{p1}(g) + F * (X_{p2}(g) - X_{p3}(g)),
s.t. i ≠ p1 ≠ p2 ≠ p3
where F is a scaling factor and X_i(g) is the i-th individual of generation g;
if the generated V_i(g+1) does not satisfy the boundary conditions
0 ≤ x ≤ N, 0 ≤ y ≤ M, 0 ≤ (R, G, B) ≤ 255, a new individual is regenerated by the method of step 3.2;
step 3.4: crossover, obtaining the crossover matrix:
u_{i,j}(g+1) = V_{i,j}(g+1), if rand(0, 1) ≤ CR or j = j_rand; otherwise X_{i,j}(g)
where CR is the crossover probability and j_rand is a random integer generated on [0, M];
step 3.5: selection: keeping the chromosomes that lower the model's output confidence more. If the chromosome u_i(g+1) represented by the i-th column of the crossover matrix perturbs the model confidence more effectively, u_i(g+1) is kept in the next-generation population; otherwise the chromosome X_i(g) of the original population is kept:
X_i(g+1) = u_i(g+1), if conf(u_i(g+1)) < conf(X_i(g)); otherwise X_i(g)
The modification effect is queried from the model to obtain the confidence sum of the perturbed targets after modification, and the modification mode with the better perturbation effect, i.e. the lower model confidence, is kept;
step 3.6: repeating steps 3.3-3.5 for the set number of iterations; as the algorithm executes, the range of the boundary conditions becomes more and more precise;
step 3.7: obtaining the coordinates of a modification point through the differential evolution algorithm, X_area_best, and taking the region containing the modification point as the precise search region: area_best = find_father(X_area_best).
This scheme uses a differential evolution algorithm; steps 3.1 to 3.5 generate a preliminary modification point. Because the search covers the whole picture, however, its precision is not high enough, and the point is likely not the optimal solution. Therefore, in step 4, a more precise search is performed in the region containing the preliminary modification point.
The object of the present application, i.e. the object of adversarial training, may be any detected object in the picture. 'Selectable training object' means that the confidence enhancement is realized for targets of a specified class rather than by modifying the whole picture, which avoids the adversarial perturbation markedly lowering the detection confidence of other targets. 'Gray box' means that the internal structure of the deep convolutional neural network need not be known; only the confidence, position, and predicted class of each target detected by the network are required.
Techniques not mentioned in the present invention follow the prior art.
The inventive gray-box adversarial example generation method with selectable multi-target training objects comprises six stages: establishing the adversarial target, segmenting the image, searching for the optimal modification region, modifying precisely, eliminating useless modification points, and adversarial training. First, the trainer establishes by eye the targets whose confidence needs enhancing; in the second stage, the image is segmented using the idea of union-find and bounding boxes of the regions are generated; in the third stage, the optimal modification region is searched for over the whole picture; in the fourth stage, the optimal modification point is searched for within the optimal modification region and its effect is applied to update the original picture; in the fifth stage, unnecessary modification points are eliminated once the modification effect is achieved, yielding the adversarial example; finally, adversarial training with the generated adversarial examples raises the detection confidence of the selected targets. Because the perturbation in an adversarial example generated this way is aimed only at the specific targets, and the disturbance to other targets in the picture is comparatively small, the method improves the robustness of the model's classification of the specific targets while preserving, as far as possible, the detection accuracy for the other targets. At the same time, since the whole image is not perturbed, few pixels are modified and the adversarial example stays very close to the original picture. In real life, pixels in a picture are corrupted by sensor faults, dirty or damaged lenses, and the like, and such corrupted pixels may cause target detection on the picture to fail; training on such near-original adversarial examples therefore also prepares the model for this kind of damage.
Drawings
FIG. 1 is a flow chart of the gray-box adversarial example generation method with selectable multi-target training objects of the present invention.
FIG. 2 illustrates the image segmentation algorithm used in the present invention.
FIG. 3 illustrates the differential evolution algorithm used by the present invention.
FIG. 4 shows an original picture from the training set and a weak adversarial example from the adversarial example generation process.
FIG. 5a is a single-target picture from the test set and its detection results on the yolov5s model without adversarial training and on the yolov5s model after adversarial training.
FIG. 5b is a single-target picture from the test set after random perturbation and its detection results on the yolov5s model without adversarial training and on the yolov5s model after adversarial training.
FIG. 5c is a multi-target picture from the test set and its detection results on the yolov5s model without adversarial training and on the yolov5s model after adversarial training.
FIG. 5d is a multi-target picture from the test set after random perturbation and its detection results on the yolov5s model without adversarial training and on the yolov5s model after adversarial training.
Detailed Description
For a better understanding of the present invention, the following examples further illustrate it, but the invention is not limited to these examples.
A gray-box adversarial example generation method with selectable multi-target training objects, as shown in FIG. 1, comprises 6 steps: first the training target is set up; then the image is segmented; two rounds of differential evolution obtain the optimal modification region and the optimal modification point respectively, iterating continuously; useless modification points are then removed to generate the adversarial example; finally the adversarial example is merged into the training set for adversarial training. The method specifically comprises the following steps:
step 1: pretreatment of
The trainer sets the category which can be identified by the modification object A, A epsilon model. Using the deep convolutional neural network model to establish the confidence sum of the current category to be modified,wherein pic refers to an original picture for generating an antigen sample, and N refers to the total number of targets to be perturbed in the picture.
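Stated as code, the objective is a one-liner. The following minimal Python sketch assumes the gray box is exposed as a function detect(pic) returning (class, confidence) pairs for every detected target, as a deployed detector such as yolov5s would; confidence_sum, Detector, and the other names are illustrative, not the patent's.

    from typing import Callable, List, Tuple
    import numpy as np

    # Hypothetical gray-box interface: detect(pic) -> [(class_name, confidence), ...]
    Detector = Callable[[np.ndarray], List[Tuple[str, float]]]

    def confidence_sum(pic: np.ndarray, target_class: str, detect: Detector) -> float:
        """conf(pic, A): sum of the detection confidences of all targets of the
        selected class A in pic; the differential-evolution search tries to
        drive this quantity to 0."""
        return sum(c for cls, c in detect(pic) if cls == target_class)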
Step 2: Image segmentation
The main purpose of this step is to divide the picture to be modified into small regions of similar color, according to the color similarity of adjacent pixels. If the color of a point within a region is very close to that of a pixel in a neighboring position, i.e. their difference does not exceed the maximum sustainable change value we set, then the two points can be considered to describe the same feature. As shown in FIG. 2, using the idea of union-find, a father array is maintained; pixels are queried through the father array, and regions are merged by modifying it. The step specifically comprises:
Step 2.1: establish the maximum change value max_dist that adjacent points within a region can bear.
Step 2.2: process the whole image with a Laplacian-of-Gaussian convolution. This reduces noise in the picture while preserving the characteristics of the image edges.
Step 2.3: define a father array giving each region's directly associated parent region number and initialize it with no repeated elements, so that each pixel in the image initially represents its own region. Search the father array with the recursive find_father operation to find the number of the region a given region ultimately belongs to. After each query, update the father array by path compression, writing the region's ultimate region number directly as the current region's parent region number.
Step 2.4: compute the degree of change between the current point and its neighbors in the four directions, measured by the Euclidean distance of the two points' R, G, B values, where R_nowpoint, G_nowpoint, B_nowpoint are the R, G, B values of the current pixel and R_neighborpoint, G_neighborpoint, B_neighborpoint those of the neighboring pixel:
dist = sqrt((R_nowpoint - R_neighborpoint)^2 + (G_nowpoint - G_neighborpoint)^2 + (B_nowpoint - B_neighborpoint)^2)
If two adjacent points belong to different regions and dist > max_dist, the two points do not belong to the same region. Otherwise, if the adjacent points belong to different regions but dist ≤ max_dist, merge the two points into the same region, maintain the father array, and correct the merged region's number to that of the target region.
Step 2.5: from each region's minimum abscissa, maximum abscissa, minimum ordinate, and maximum ordinate, form a bounding box representing the whole region. A minimal sketch of this segmentation follows.
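Under stated assumptions, steps 2.1-2.5 might look as follows in Python. The sketch uses scipy's gaussian_laplace for the Laplacian-of-Gaussian pass of step 2.2 (how exactly the patent applies the filter is not specified, so merging is done on the filtered values here) and an iterative find with path compression instead of the recursive find_father, to stay clear of Python's recursion limit on large pictures.

    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def segment(img: np.ndarray, max_dist: float):
        """Union-find segmentation sketch. img: H x W x 3 RGB array.
        Returns (labels, boxes): labels is an H x W array of region numbers,
        boxes maps region number -> (min_x, min_y, max_x, max_y)."""
        h, w = img.shape[:2]
        father = list(range(h * w))              # init: one region per pixel

        def find_father(a: int) -> int:          # find with path compression
            root = a
            while father[root] != root:
                root = father[root]
            while father[a] != root:             # compress the visited path
                father[a], a = root, father[a]
            return root

        # Step 2.2: Laplacian-of-Gaussian over the two spatial axes only.
        filt = gaussian_laplace(img.astype(float), sigma=(1.0, 1.0, 0.0))
        for y in range(h):
            for x in range(w):
                for ny, nx in ((y, x + 1), (y + 1, x)):  # right/down covers 4-adjacency
                    if ny < h and nx < w:
                        dist = np.linalg.norm(filt[y, x] - filt[ny, nx])
                        if dist <= max_dist:             # similar colors: merge
                            ra, rb = find_father(y * w + x), find_father(ny * w + nx)
                            if ra != rb:
                                father[rb] = ra
        labels = np.array([find_father(i) for i in range(h * w)]).reshape(h, w)
        boxes = {}                               # step 2.5: bounding boxes
        for yy, xx in np.ndindex(h, w):
            r = labels[yy, xx]
            x0, y0, x1, y1 = boxes.get(r, (xx, yy, xx, yy))
            boxes[r] = (min(x0, xx), min(y0, yy), max(x1, xx), max(y1, yy))
        return labels, boxes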
Step 3: Differential evolution searches for the modification region
This scheme uses a differential evolution algorithm, as shown in FIG. 3: an initial population is generated, mutated, and crossed; the model is queried; and individuals are selected according to the confidence of the target sample. During iteration the boundary conditions keep narrowing, so that a global optimum can be found. Steps 3.1 to 3.5 generate a preliminary modification point. Step 3 specifically comprises the following steps:
Step 3.1: establish the population structure, the number of iterations, and the population size.
In this scheme the population structure of the differential evolution is a single-pixel modification, a 5-dimensional vector: we define the modification at a point on the picture as [x, y, R, G, B], representing the abscissa, the ordinate, and the corresponding R, G, B values of the pixel at each modification point.
We turn the problem into an optimization problem, i.e. find a suitable modification r in the solution space of all possible modifications such that the confidence sum of the class currently to be modified is minimal:
min conf(pic, A, r)
s.t. 0 ≤ x ≤ N, 0 ≤ y ≤ M, 0 ≤ (R, G, B) ≤ 255
where A is the modification object, r is the modification, N is the length of the picture, M is the width of the picture, and R, G, B are the three channels of the pixel.
Step 3.2: initialize the population:
{X_i(0) | 0 ≤ X_{i,1}(0) ≤ N, 0 ≤ X_{i,2}(0) ≤ M, 0 ≤ X_{i,3}(0) ≤ 255, 0 ≤ X_{i,4}(0) ≤ 255, 0 ≤ X_{i,5}(0) ≤ 255, i = 1, 2, ..., np}
where X_i(0) is the i-th "chromosome" of generation 0 of the population, a 5-dimensional vector [x, y, R, G, B], and np is the population size, i.e. the number of individuals performing the differential evolution search.
For the i-th chromosome, the j-th gene is drawn as
X_{i,j}(0) = X_j^min + rand(0, 1) * (X_j^max - X_j^min)
where X_{i,j}(0) is the j-th gene of the i-th chromosome of generation 0, X_j^min is the lower bound on X_{i,j}, the minimum of the j-th column of X, and X_j^max is the upper bound on X_{i,j}, the maximum of the j-th column of X.
Step 3.3: mutation. Randomly select 3 distinct individuals p1, p2, p3 from the population and generate the mutation vector:
V_i(g+1) = X_{p1}(g) + F * (X_{p2}(g) - X_{p3}(g)),
s.t. i ≠ p1 ≠ p2 ≠ p3
where F is a scaling factor and X_i(g) is the i-th individual of generation g.
If the generated V_i(g+1) does not satisfy the boundary conditions
0 ≤ x ≤ N, 0 ≤ y ≤ M, 0 ≤ (R, G, B) ≤ 255, a new "individual" is regenerated by the method of step 3.2.
Step 3.4: crossover, obtaining the crossover matrix:
u_{i,j}(g+1) = V_{i,j}(g+1), if rand(0, 1) ≤ CR or j = j_rand; otherwise X_{i,j}(g)
where CR is the crossover probability and j_rand is a random integer generated on [0, M].
step 3.5: selecting, namely reserving chromosomes with more confidence level reduced output by the model. Chromosome u represented by column i of cross matrix i (g+1) more efficient perturbation of model confidence, chromosome u is preserved in the next generation population i (g+1) otherwise, chromosome X in the original population is retained i (g)
And inquiring the modification effect from the model to obtain the confidence coefficient sum of the modified disturbance target, and reserving an individual with better disturbance effect, lower target confidence coefficient sum and lower model confidence coefficient modification mode.
Step 3.6: repeat steps 3.3-3.5 for the set number of iterations; as the algorithm executes, the range of the boundary conditions becomes more and more precise.
Step 3.7: the differential evolution algorithm yields the coordinates of a modification point, X_area_best; we take the region containing this modification point as the precise search region: area_best = find_father(X_area_best). A minimal sketch of the whole search follows.
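A minimal differential-evolution sketch of steps 3.1-3.5, assuming objective(v) queries the gray-box model and returns the target confidence sum after applying the single-pixel modification v = [x, y, R, G, B]; the hyperparameter defaults (pop_size, iters, F, CR) are illustrative, not taken from the patent.

    import numpy as np

    def de_search(objective, bounds, pop_size=40, iters=75, F=0.5, CR=0.9, seed=None):
        """Differential evolution over 5-D individuals [x, y, R, G, B].
        bounds: five (low, high) pairs, e.g. [(0, N), (0, M)] + [(0, 255)] * 3.
        Returns the best individual and its objective value."""
        rng = np.random.default_rng(seed)
        lo = np.array([b[0] for b in bounds], dtype=float)
        hi = np.array([b[1] for b in bounds], dtype=float)
        pop = lo + rng.random((pop_size, 5)) * (hi - lo)       # step 3.2: init
        fit = np.array([objective(v) for v in pop])
        for _ in range(iters):                                 # step 3.6: iterate
            for i in range(pop_size):
                p1, p2, p3 = rng.choice(
                    [k for k in range(pop_size) if k != i], size=3, replace=False)
                v = pop[p1] + F * (pop[p2] - pop[p3])          # step 3.3: mutation
                if np.any(v < lo) or np.any(v > hi):           # boundary violated:
                    v = lo + rng.random(5) * (hi - lo)         # re-draw as in 3.2
                j_rand = rng.integers(5)                       # forced crossover gene
                mask = (rng.random(5) < CR) | (np.arange(5) == j_rand)
                u = np.where(mask, v, pop[i])                  # step 3.4: crossover
                fu = objective(u)
                if fu < fit[i]:                                # step 3.5: selection
                    pop[i], fit[i] = u, fu
        best = int(np.argmin(fit))
        return pop[best], float(fit[best])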
Step 4: Differential evolution precisely searches for the modification point
Step 4.1: this step is similar to step 3, but the search range changes from the whole picture pic to the precise search region area_best.
Step 4.2: the differential evolution algorithm yields the precisely searched modification point X_best; the modification at this point is applied directly to the original picture, giving pic', and the modified picture is taken as the original picture for the next perturbation: pic_next = pic'.
Step 4.3: record the coordinates of the optimal modification point, its R, G, B channel modification mode, and the drop in the target confidence sum; update the upper bound of the target confidence sum; and keep the current optimal modification point as a candidate modification point.
Step 4.4: repeat the above steps until the perturbation effect reaches the ideal state, i.e. the target confidence sum is 0 and none of the perturbed targets is correctly detected or classified. A sketch of this loop follows.
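Composing the sketches above, the step 3 / step 4 loop could be written as below; the max_points budget and all helper names are additions for illustration, not the patent's.

    import numpy as np

    def apply_modification(pic: np.ndarray, v) -> np.ndarray:
        """Burn the individual v = [x, y, R, G, B] into a copy of pic."""
        out = pic.copy()
        x, y = int(v[0]), int(v[1])
        out[y, x] = np.clip(np.round(v[2:5]), 0, 255)
        return out

    def generate_adversarial(pic, target_class, detect, labels, boxes, max_points=200):
        """Steps 3-4: a whole-picture DE search yields a coarse point, its
        segmented region (area_best) bounds a finer DE search, the best point
        is applied to the picture (pic_next = pic'), and the loop repeats
        until the target confidence sum reaches 0."""
        h, w = pic.shape[:2]
        obj = lambda img: confidence_sum(img, target_class, detect)
        candidates = []                                       # candidate points
        while obj(pic) > 0 and len(candidates) < max_points:
            full = [(0, w - 1), (0, h - 1)] + [(0, 255)] * 3
            coarse, _ = de_search(lambda v: obj(apply_modification(pic, v)), full)
            x0, y0, x1, y1 = boxes[labels[int(coarse[1]), int(coarse[0])]]
            area = [(x0, x1), (y0, y1)] + [(0, 255)] * 3      # area_best bounds
            before = obj(pic)
            best, after = de_search(lambda v: obj(apply_modification(pic, v)), area)
            candidates.append((best, before - after))         # record the drop (4.2)
            pic = apply_modification(pic, best)               # update original (4.3)
        return pic, candidates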
Step 5: Eliminating unnecessary candidate points
Step 5.1: sort the candidate modification points from smallest to largest drop in the confidence sum.
Step 5.2: restore the picture at the candidate modification points in that order, obtain the restored confidence from the model, and, if the perturbation effect is unaffected, permanently restore the original picture at that candidate point. A pruning sketch follows.
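A sketch of the pruning pass, reusing confidence_sum from above; each candidate is a (point, confidence-drop) pair as recorded in step 4.2.

    def prune_candidates(original, adversarial, candidates, target_class, detect):
        """Step 5: visit candidate points from smallest to largest confidence
        drop; restore each pixel to its original value and re-query the model.
        If the perturbation still works (confidence sum stays 0) the point was
        unnecessary and the restoration is kept; otherwise it is put back."""
        pic = adversarial.copy()
        for v, _drop in sorted(candidates, key=lambda c: c[1]):  # step 5.1
            x, y = int(v[0]), int(v[1])
            saved = pic[y, x].copy()
            pic[y, x] = original[y, x]                           # try restoring
            if confidence_sum(pic, target_class, detect) > 0:    # effect broken:
                pic[y, x] = saved                                # keep the point
        return pic                                               # final adversarial example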
Step 6: Adversarial training
The adversarial examples are merged into the training set for adversarial training. FIG. 4 shows an original training-set picture and the generated adversarial example; the adversarial example is produced on the basis of the original picture, and adversarial training with it improves the robustness of the model.
After adversarial training, the model shows the following performance improvements. In the experiments the display threshold for target detection confidence is 0.25, and a harsh environment is simulated by perturbing random pixels of the picture:
1. in a normal environment (the picture is unperturbed), with a single target, the accuracy of the model's target detection improves; the effect is shown in FIG. 5a;
2. in a harsher environment (the picture suffers random pixel perturbation), with a single target, the accuracy of the model's target detection improves; the effect is shown in FIG. 5b;
3. in a normal environment (the picture is unperturbed), with multiple targets, the accuracy of the model's target detection improves; the effect is shown in FIG. 5c;
4. in a harsh environment (the picture suffers random pixel perturbation), with multiple targets, the accuracy of the model's target detection improves; the effect is shown in FIG. 5d.

Claims (6)

1. A gray-box adversarial example generation method with selectable multi-target training objects, characterized by comprising the following steps:
step 1: preprocessing: setting the modification object, the number of differential evolution iterations, and the population size;
step 2: image segmentation: dividing the large picture, according to the color similarity of adjacent pixels of the picture to be modified, into a number of small regions of similar color;
step 3: searching for the best modification region with a differential evolution algorithm: searching the picture itself directly gives a relatively large search range, so the main objective is to find the optimal modification region and thereby bound the range of the finer search, comprising:
step 3.1: establishing the population structure, the number of iterations, and the population size:
the differential evolution algorithm looks for the modification mode of the optimal pixel, so the population structure is defined as a 5-dimensional vector [x, y, R, G, B] for a point on the picture, representing the abscissa, the ordinate, and the corresponding R, G, B values of the pixel at each modification point;
the problem is converted into an optimization problem, i.e. a suitable modification r is sought in the solution space of all possible modifications such that the confidence sum of the class currently to be modified is minimal:
min conf(pic, A, r)
s.t. 0 ≤ x ≤ N, 0 ≤ y ≤ M, 0 ≤ (R, G, B) ≤ 255
where A is the modification object, r is the modification, N is the length of the picture, M is the width of the picture, R, G, B are the three channels of the pixel, and pic is the picture;
step 3.2: initializing the population:
{X_i(0) | 0 ≤ X_{i,1}(0) ≤ N, 0 ≤ X_{i,2}(0) ≤ M, 0 ≤ X_{i,3}(0) ≤ 255, 0 ≤ X_{i,4}(0) ≤ 255, 0 ≤ X_{i,5}(0) ≤ 255, i = 1, 2, ..., np}
where X_i(0) is the i-th "chromosome" of generation 0 of the population, a 5-dimensional vector [x, y, R, G, B], and np is the population size, i.e. the number of individuals performing the differential evolution search;
for the i-th chromosome, the j-th gene is drawn as
X_{i,j}(0) = X_j^min + rand(0, 1) * (X_j^max - X_j^min)
where X_{i,j}(0) is the j-th gene of the i-th chromosome of generation 0, X_j^min is the lower bound on X_{i,j}, the minimum of the j-th column of X, and X_j^max is the upper bound on X_{i,j}, the maximum of the j-th column of X;
step 3.3: mutation: randomly selecting 3 distinct individuals p1, p2, p3 from the population and generating the mutation vector:
V_i(g+1) = X_{p1}(g) + F * (X_{p2}(g) - X_{p3}(g)),
s.t. i ≠ p1 ≠ p2 ≠ p3
where F is a scaling factor and X_i(g) is the i-th individual of generation g;
if the generated V_i(g+1) does not satisfy the boundary conditions
0 ≤ x ≤ N, 0 ≤ y ≤ M, 0 ≤ (R, G, B) ≤ 255, regenerating a new individual by the method of step 3.2;
step 3.4: crossover, obtaining the crossover matrix:
u_{i,j}(g+1) = V_{i,j}(g+1), if rand(0, 1) ≤ CR or j = j_rand; otherwise X_{i,j}(g)
where CR is the crossover probability and j_rand is a random integer generated on [0, M];
step 3.5: selection: keeping the chromosomes that lower the model's output confidence more; if the chromosome u_i(g+1) represented by the i-th column of the crossover matrix perturbs the model confidence more effectively, u_i(g+1) is kept in the next-generation population, otherwise the chromosome X_i(g) of the original population is kept:
X_i(g+1) = u_i(g+1), if conf(u_i(g+1)) < conf(X_i(g)); otherwise X_i(g)
querying the modification effect from the model to obtain the confidence sum of the perturbed targets after modification, and keeping the modification mode with the better perturbation effect, i.e. the lower model confidence;
step 3.6: repeating steps 3.3-3.5 for the set number of iterations, gradually approaching the optimal modification mode as the algorithm executes;
step 3.7: obtaining the coordinates of a modification point through the differential evolution algorithm, X_area_best, and taking the region containing the modification point as the precise search region: area_best = find_father(X_area_best);
Step 4: differential evolution precisely finds the modification point:
step 4.1: this step is similar to step 3, but the search range is changed from the whole picture pic to the accurate search area best
Step 4.2: obtaining an accurate check through a differential evolution algorithmFind modification point X best The modification of this point is effected directly on the original pic and the modified pic' is taken as the original pic for the next perturbation next =pic';
Step 4.3: recording the coordinates of the optimal modification points, the R, G, B channel modification mode and the reduction degree of the target confidence coefficient sum, updating the upper limit of the target confidence coefficient sum, and taking the current optimal modification point as an alternative modification point;
step 4.4: repeating the step 3 until the disturbance effect reaches an ideal state, wherein the total target confidence coefficient is 0, and all disturbed targets are not detected correctly or failed in detection and classification;
step 5: eliminating unnecessary candidate points:
step 5.1: sorting the candidate modification points from smallest to largest drop in the target confidence sum;
step 5.2: restoring the picture at the candidate modification points in ascending order of the target confidence-sum drop, obtaining the restored confidence through the model, and, if the perturbation effect is unaffected, restoring the modification at that candidate point on the original picture;
step 6: adversarial training: merging the adversarial examples into the training set and performing adversarial training.
2. The gray-box adversarial example generation method with selectable multi-target training objects of claim 1, wherein: in step 1, the trainer sets the target classes whose confidence is to be enhanced and, by feeding the original picture to the model, establishes the confidence sum of the classes currently to be modified as well as the confidences of the other classes.
3. The gray-box adversarial example generation method with selectable multi-target training objects of claim 2, wherein: in step 1, the trainer sets the modification object A, where A is a class the model can identify (A ∈ model), and uses the deep convolutional neural network model to establish the confidence sum of the class currently to be modified, conf(pic, A) = Σ_{i=1..n} conf_i(pic, A), where pic is the original picture used to generate the adversarial example and n is the total number of targets to be perturbed in the picture.
4. The gray-box adversarial example generation method with selectable multi-target training objects of any of claims 1-3, wherein step 2 comprises the following steps:
step 2.1: establishing the maximum change value max_dist that adjacent pixels within a region can bear;
step 2.2: processing the whole image with a Laplacian-of-Gaussian convolution to reduce noise in the picture while preserving the characteristics of the image edges;
step 2.3: segmenting the image using the idea of union-find, under the constraint that adjacent points bear at most the maximum change value;
step 2.4: forming a bounding box for each segmented region to represent the whole region.
5. The gray-box adversarial example generation method with selectable multi-target training objects of claim 4, wherein step 2.3 is: defining a father array giving each region's directly associated parent region number and initializing it with no repeated elements, each pixel in the image then representing its own region; searching the father array with the recursive find_father operation to find the number of the region a given region ultimately belongs to; and updating the father array after each query by path compression, writing the region's ultimate region number directly as the current region's parent region number;
computing the degree of change between the current point and its neighbors in the four directions, measured by the Euclidean distance of the two points' R, G, B values, where R_nowpoint, G_nowpoint, B_nowpoint are the R, G, B values of the current pixel and R_neighborpoint, G_neighborpoint, B_neighborpoint those of the neighboring pixel:
dist = sqrt((R_nowpoint - R_neighborpoint)^2 + (G_nowpoint - G_neighborpoint)^2 + (B_nowpoint - B_neighborpoint)^2)
if two adjacent points belong to different regions and dist > max_dist, the two points do not belong to the same region; otherwise, if they belong to different regions but dist ≤ max_dist, the two points are merged into the same region, the father array is maintained, and the parent region number directly associated with the neighboring pixel is corrected to that of the current pixel;
step 2.4 is: forming, from each segmented region's minimum abscissa, maximum abscissa, minimum ordinate, and maximum ordinate, a bounding box that approximately represents the whole region.
6. The gray-box adversarial example generation method with selectable multi-target training objects of any of claims 1-3, wherein: in step 3, a differential evolution algorithm searches the whole picture to obtain the optimal modification region.
CN202111465982.8A 2021-12-03 2021-12-03 Gray-box adversarial example generation method with selectable multi-target training objects Active CN114139631B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111465982.8A CN114139631B (en) Gray-box adversarial example generation method with selectable multi-target training objects

Publications (2)

Publication Number Publication Date
CN114139631A (en) 2022-03-04
CN114139631B (en) 2023-07-28

Family

ID=80387657

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111465982.8A Active CN114139631B (en) Gray-box adversarial example generation method with selectable multi-target training objects

Country Status (1)

Country Link
CN (1) CN114139631B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114419453B (en) * 2022-04-01 2022-07-01 中国人民解放军火箭军工程大学 Group target detection method based on electromagnetic scattering characteristics and topological configuration
CN116542468B (en) * 2023-05-06 2023-10-20 中国人民解放军32370部队 Unmanned aerial vehicle cluster task planning method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10592787B2 (en) * 2017-11-08 2020-03-17 Adobe Inc. Font recognition using adversarial neural network training
US11568324B2 (en) * 2018-12-20 2023-01-31 Samsung Display Co., Ltd. Adversarial training method for noisy labels
CN109961145B (en) * 2018-12-21 2020-11-13 北京理工大学 Antagonistic sample generation method for image recognition model classification boundary sensitivity
CN110097185B (en) * 2019-03-29 2021-03-23 北京大学 Optimization model method based on generation of countermeasure network and application
CN112311733A (en) * 2019-07-30 2021-02-02 四川大学 Method for preventing attack counterattack based on reinforcement learning optimization XSS detection model
CN112949678B (en) * 2021-01-14 2023-05-02 西安交通大学 Deep learning model countermeasure sample generation method, system, equipment and storage medium

Also Published As

Publication number Publication date
CN114139631A (en) 2022-03-04

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant