CN115409705A - Adversarial sample generation method for SAR image target recognition model - Google Patents
- Publication number: CN115409705A
- Application number: CN202211027951.9A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/11 — Region-based segmentation
- G06T3/4007 — Interpolation-based scaling, e.g. bilinear interpolation
- G06T5/40 — Image enhancement or restoration by the use of histogram techniques
- G06T7/136 — Segmentation; edge detection involving thresholding
- G06T7/66 — Analysis of geometric attributes of image moments or centre of gravity
- G06V10/757 — Matching configurations of points or features
- G06V10/764 — Image or video recognition or understanding using pattern recognition or machine learning, using classification
- G06T2207/10032 — Satellite or aerial image; remote sensing
- G06T2207/10044 — Radar image
Abstract
The invention provides an adversarial sample generation method for a SAR image target recognition model, comprising the following steps: obtaining a reconstructed image based on the attributed scattering centers; generating an initial adversarial sample; obtaining the offset of each pixel of the adversarial sample; computing the adversarial sample after iteration; obtaining an output classification result; and judging the stopping condition. The method generates adversarial samples for SAR images by introducing the attributed scattering center model (i.e., the attributed scattering center model is effectively used as a bridge between the image domain and the echo domain) and adopts a new spatial-transformation idea, rather than changing the original pixel values of the image as in the prior art. The method achieves a high attack success rate; because generation is guided by the image reconstructed from the attributed scattering centers, the perturbation concentrates on the strong scattering structures of the SAR image, and the strong scattering points of the generated adversarial samples change markedly, clearly outperforming the prior art.
Description
Technical Field
The invention relates to the technical field of artificial intelligence security, and in particular to an adversarial sample generation method for a SAR image target recognition model.
Background
Synthetic Aperture Radar (SAR) automatic target recognition is widely used in military and civilian fields.
With the development of deep learning, deep models, especially convolutional neural networks, have greatly improved the performance of SAR image target recognition. Deep learning, however, suffers from a serious vulnerability: a small perturbation of the input image can cause a well-trained classification model to predict a completely wrong class for the perturbed image. Researchers attribute this phenomenon to the linear behaviour of high-dimensional features in deep models and call it an adversarial attack. Because SAR image datasets are small, network training tends to overfit and the learned features are not robust, so deep learning models for SAR images are highly vulnerable to adversarial attack.
Current mainstream adversarial attack techniques fall into two classes, optimization-based and generation-based:
Optimization-based adversarial sample generation methods set an optimization objective and a measurement function and generate adversarial samples that satisfy the objective by iterating an optimization algorithm. These methods typically take an L_p norm (p = 1, 2, ∞) as the measurement function and use gradient descent, the alternating direction method of multipliers, or heuristic algorithms as the iterative optimizer.
Generation-based methods use an end-to-end generative adversarial network to learn the mapping from an original sample to a perturbed sample: a given original sample is fed into the network, which outputs the perturbed sample.
Adversarial attack methods for natural image classification models are mainly designed around the characteristics of convolutional neural networks. For example, the JSMA algorithm exploits the linear behaviour of high-dimensional CNN features, computes forward derivatives to construct a saliency map, and perturbs the pixels whose change has the largest influence; the SparseFool algorithm exploits the low mean curvature of CNN decision boundaries to jointly minimize the adversarial loss and the distance to the decision boundary, generating high-quality adversarial samples.
Owing to its unique imaging mechanism, the basic expression unit of a SAR image is not the pixel, as in an optical image, but the resolution cell determined jointly by the range and azimuth resolutions, so pixel-wise perturbations fit the SAR imaging mechanism poorly. Current research on adversarial sample generation for SAR target recognition models focuses on changing the values of individual image-domain pixels. However, the main semantic features of a SAR target — strong scattering points, target contour, electromagnetic scattering characteristics, and so on — are expressed by relative relationships among many pixels, and single-pixel changes have little influence on them. Because traditional SAR target recognition methods are built on these semantic features, and multi-pixel adversarial perturbations capable of altering them conform to the resolution-cell characteristics of SAR images, such perturbations can interfere with both the convolutional neural models and the traditional models used for target recognition, and are more likely to be realizable in the physical domain.
In view of the above, a method for generating adversarial samples for SAR images is needed to solve the problems of the prior art.
Disclosure of Invention
The object of the invention is to provide an adversarial sample generation method for a SAR image target recognition model. The method introduces the attributed scattering center model to generate adversarial samples of SAR images (i.e., the attributed scattering center model is effectively used as a bridge between the image domain and the echo domain) and adopts a new spatial-transformation idea, instead of changing the original pixel values of the image as in the prior art. The method achieves a high attack success rate; because generation is guided by the image reconstructed from the attributed scattering centers, the perturbation concentrates on the strong scattering structures of the SAR image, and the strong scattering points of the generated adversarial samples change markedly, clearly outperforming the prior art.
The invention adopts the following specific technical scheme:
An adversarial sample generation method for a SAR image target recognition model comprises the following steps:
Step S1, inputting an original image x, and extracting the attributed scattering centers with a genetic algorithm to obtain the attributed scattering center parameter set; reconstructing from the attributed scattering centers to obtain a reconstructed image x' based on the attributed scattering centers;
Step S2, generating an initial adversarial sample x_adv = x, setting the maximum number of iterations Q, and initializing the current iteration count i = 1;
Step S3, computing the offset f(n, m) of each pixel of the adversarial sample by minimizing a loss function with the L-BFGS algorithm;
Step S4, computing the iterated adversarial sample x_adv from the offsets f(n, m) obtained in step S3;
Step S5, inputting the iterated adversarial sample x_adv obtained in step S4 into a classifier H(x) to obtain an output classification result, and setting i = i + 1;
Step S6, judging: if the output classification result differs from the original class label or i > Q, outputting the generated adversarial sample x_adv; otherwise returning to step S3.
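The iteration of steps S2 to S6 can be sketched as follows. Here `classify`, `solve_flow` and `warp` are hypothetical stand-ins for the classifier H(x), the L-BFGS offset solver of step S3 and the bilinear warping of step S4; this is a minimal illustration of the control flow, not the patented implementation:

```python
import numpy as np

def generate_adversarial(x, x_recon, classify, solve_flow, warp, max_iter=1000):
    """Sketch of steps S2-S6: iterate flow estimation and warping until
    the classifier's label changes or the iteration budget Q is exhausted.
    `classify`, `solve_flow` (the L-BFGS step) and `warp` are assumed callables."""
    x_adv = x.copy()                       # S2: initial adversarial sample
    orig_label = classify(x)
    for _ in range(max_iter):              # i = 1 .. Q
        flow = solve_flow(x_adv)           # S3: per-pixel offsets f(n, m)
        flow[x_recon == 0] = 0.0           # zero offsets outside strong scatterers
        x_adv = warp(x_adv, flow)          # S4: bilinear resampling
        if classify(x_adv) != orig_label:  # S5/S6: stop on misclassification
            break
    return x_adv
```

Masking the flow with the reconstructed image `x_recon` is what focuses the perturbation on the strong scattering structure.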
Preferably, step S1 comprises the following steps:
Step S1.1, obtaining the attributed scattering center parameter set Θ, specifically:
Step (1), performing Otsu threshold segmentation on the original image to obtain a target-area image;
Step (2), segmenting the region above −20 dB inside the target area and selecting the components that need parameter estimation;
Step (3), cyclically estimating the parameters of each component with a genetic algorithm to obtain the attributed scattering center parameter set
Θ = {θ_i}, θ_i = [A_i, α_i, x_i, y_i, L_i, φ̄_i, γ_i], i = 1, …, q
where θ_i denotes the parameters of the i-th attributed scattering center, A_i is the amplitude of the scattering center response, α_i characterizes the frequency dependence of the response, x_i and y_i are the positions of the scattering center in the range and azimuth directions, L_i characterizes the length of a distributed scattering center, φ̄_i characterizes the azimuth angle of a distributed scattering center relative to the SAR sensor, γ_i characterizes the azimuth dependence of the scattering response of a localized scattering center, and q is the number of attributed scattering centers;
Step S1.2, reconstructing from the attributed scattering centers to obtain a reconstructed image x' based on the attributed scattering centers.
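Steps (1) and (2) of S1.1, Otsu segmentation followed by a −20 dB cut relative to the image peak, can be sketched with NumPy as follows; the 256-bin histogram and the dB reference to the image maximum are illustrative assumptions:

```python
import numpy as np

def otsu_threshold(img):
    """Minimal Otsu threshold (step (1)): maximize the between-class
    variance over a 256-bin histogram of a [0, 1]-scaled image."""
    hist, edges = np.histogram(img.ravel(), bins=256, range=(0.0, 1.0))
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    k = np.nanargmax(sigma_b)
    return edges[k + 1]

def strong_regions(img, floor_db=-20.0):
    """Step (2): keep target-area pixels within `floor_db` of the peak
    (amplitudes assumed non-negative; dB taken relative to the maximum)."""
    mask = img > otsu_threshold(img)
    db = 20.0 * np.log10(np.maximum(img, 1e-12) / img.max())
    return mask & (db >= floor_db)
```

The connected components of the resulting mask would then be the parts passed to the genetic-algorithm parameter estimation of step (3).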
Preferably, step S1.2 specifically comprises the steps of:
Step (1), computing the response E_i of each attributed scattering center with the following formula:
E_i(f, φ) = A_i · (j f / f_c)^{α_i} · exp(−j(4πf/c)(x_i cos φ + y_i sin φ)) · sinc((2πf/c) L_i sin(φ − φ̄_i)) · exp(2πf γ_i sin φ)
where j is the imaginary unit, f is the frequency, f_c is the center frequency of the radar signal, φ is the azimuth angle, c is the propagation speed of the electromagnetic wave, and sinc is the sampling function;
Step (2), summing the responses E_i of all attributed scattering centers to obtain the total response E;
Step (3), designing −35 dB Taylor windows in the x and y directions and multiplying the attributed scattering center imaging result E by the two windows, which together form the filter;
Step (4), zero-padding the filtered result to a set size, usually the size of the original picture, and obtaining the reconstructed image x' based on the attributed scattering centers through a 2D-IFFT.
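Steps (3) and (4), separable Taylor windowing, zero-padding and a 2-D inverse FFT, can be sketched as follows; the placement of the padded block and the final `fftshift` are implementation choices not fixed by the text:

```python
import numpy as np
from scipy.signal.windows import taylor

def asc_to_image(E, out_shape=(128, 128), sll_db=35):
    """Window the summed frequency-domain response E with separable
    Taylor windows (-35 dB sidelobe level, as in the text), zero-pad to
    the original picture size, and take a 2-D inverse FFT."""
    ky, kx = E.shape
    w2d = np.outer(taylor(ky, sll=sll_db), taylor(kx, sll=sll_db))
    Ef = E * w2d                                   # separable x/y filter
    pad = np.zeros(out_shape, dtype=complex)
    pad[:ky, :kx] = Ef                             # zero-fill to set size
    img = np.fft.ifft2(pad)                        # 2D-IFFT
    return np.abs(np.fft.fftshift(img))
```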
Preferably, in step S3, the loss function L minimized by the L-BFGS algorithm is
L = L_adv + λ · L_rec
where L_adv denotes the adversarial loss that constrains the samples toward a successful attack:
L_adv = max( Z(x')_t − max_{i≠t} Z(x')_i , −κ )
in which x' is the candidate adversarial sample, t is the class label of the adversarial sample to be generated, i ranges over the class labels other than t, Z(x')_i is the value corresponding to class i in the logits output by the attacked model for the adversarial sample x', Z(x')_t is the value corresponding to class t, and the parameter κ sets the attack strength;
L_rec denotes the reconstruction loss that produces a locally smooth image:
L_rec = Σ_{n=1..h} Σ_{m=1..w} Σ_{(u,v)∈N(n,m)} √( (Δu^{(n)} − Δu^{(u)})² + (Δv^{(m)} − Δv^{(v)})² )
where h and w are the height and width of the image, N(n, m) is the four-neighbourhood of pixel (n, m), (u, v) are the coordinates of a neighbour of pixel (n, m), (Δu^{(n)}, Δv^{(m)}) is the offset of pixel (n, m), and (Δu^{(u)}, Δv^{(v)}) is the offset of pixel (u, v);
the parameter λ balances the effect of the two loss functions.
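The two loss terms can be sketched numerically as follows. The exact form of the margin loss (which class t denotes and the sign convention) is an assumption in the untargeted C&W style; the smoothness term sums offset differences over four-neighbour pairs:

```python
import numpy as np

def adv_loss(logits, t, kappa=0.0):
    """C&W-style margin loss: drive the logit of class t below the best
    other class by at least kappa (untargeted form assumed here)."""
    others = np.delete(logits, t)
    return max(logits[t] - others.max(), -kappa)

def flow_smoothness(flow):
    """Reconstruction/smoothness loss over 4-neighbourhoods: sum of
    Euclidean distances between each pixel's offset and its neighbours'."""
    du, dv = flow[..., 0], flow[..., 1]
    total = 0.0
    for ax in (0, 1):                      # vertical and horizontal pairs
        d2 = np.diff(du, axis=ax) ** 2 + np.diff(dv, axis=ax) ** 2
        total += 2.0 * np.sqrt(d2).sum()   # each unordered pair counted twice
    return total

def total_loss(logits, t, flow, lam=0.005):
    return adv_loss(logits, t) + lam * flow_smoothness(flow)
```

λ = 0.005 follows the value used later in the embodiment.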
Preferably, step S4 comprises the following steps:
Step S4.1, initializing the pixel index (n, m) = (0, 0);
Step S4.2, if the value of pixel (n, m) in the reconstructed image based on the attributed scattering centers is 0, setting f(n, m) = (0, 0); otherwise leaving f(n, m) unchanged;
Step S4.3, computing the shifted position (u, v) of pixel (n, m), namely (u, v) = (n, m) + f(n, m);
Step S4.4, computing the pixel value V_{nm} at the shifted position of pixel (n, m) by bilinear interpolation:
V_{nm} = Σ_{(p,q)∈N(u,v)} V_{pq} · (1 − |u − p|) · (1 − |v − q|)
where N(u, v) is the set of the four integer pixel positions surrounding (u, v) and V_{pq} is the image value at position (p, q);
Step S4.5, judging whether m < w: if so, setting m = m + 1 and returning to step S4.2; if not, further judging whether n < h: if so, setting n = n + 1 and returning to step S4.2; otherwise taking the set of all computed pixel values as the iterated adversarial sample x_adv and proceeding to the next step.
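The per-pixel loop of steps S4.1 to S4.5 can be vectorized; the sketch below samples every pixel at its shifted position with bilinear interpolation, clipping out-of-range coordinates to the image border (the border handling is an assumption):

```python
import numpy as np

def warp_bilinear(img, flow):
    """Steps S4.2-S4.4: sample each pixel (n, m) of `img` at the shifted
    location (u, v) = (n, m) + f(n, m) with bilinear interpolation."""
    h, w = img.shape
    n, m = np.mgrid[0:h, 0:w]
    u = np.clip(n + flow[..., 0], 0, h - 1)
    v = np.clip(m + flow[..., 1], 0, w - 1)
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    u1, v1 = np.minimum(u0 + 1, h - 1), np.minimum(v0 + 1, w - 1)
    a, b = u - u0, v - v0                  # fractional parts
    return ((1 - a) * (1 - b) * img[u0, v0] + (1 - a) * b * img[u0, v1]
            + a * (1 - b) * img[u1, v0] + a * b * img[u1, v1])
```

With a zero flow the warp is the identity, which matches the initialization x_adv = x of step S2.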
In addition to the objects, features and advantages described above, the invention has others, which are described in further detail below with reference to the drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
Fig. 1 is a schematic diagram of the adversarial attack results for one sample of class 2S1 in the MSTAR dataset in preferred embodiment 1 of the invention, wherein fig. 1(a) is the original unperturbed sample and figs. 1(b) to (f) are the untargeted adversarial samples of this sample against the ResNet18 network under the proposed ASC-STA method and the C&W, DeepFool, JSMA and SparseFool algorithms, respectively; the first row shows the adversarial samples and the second row the corresponding perturbations;
Fig. 2 shows the strong scattering point extraction results for the original sample and the generated adversarial samples of fig. 1, wherein fig. 2(a) is the extraction result for the original sample of fig. 1, and figs. 2(b) to (f) are the extraction results for the adversarial samples generated against the ResNet18 network by the proposed ASC-STA method and the C&W, DeepFool, JSMA and SparseFool algorithms, respectively;
Fig. 3 shows the average degree of change of the strong scattering points before and after perturbation of the MSTAR dataset samples under the ResNet18 network, evaluated with the improved shape context descriptor.
Detailed Description
Embodiments of the invention will be described in detail below with reference to the drawings, but the invention can be implemented in many different ways as defined and covered by the claims.
Embodiment:
An adversarial sample generation method for a SAR image target recognition model comprises the following steps:
Step S1, inputting an original image x, extracting the attributed scattering centers with a genetic algorithm to obtain the attributed scattering center parameter set, and reconstructing from the attributed scattering centers to obtain a reconstructed image x' based on the attributed scattering centers, specifically comprising the following steps:
Step S1.1, obtaining the attributed scattering center parameter set Θ, specifically:
Step (1), performing Otsu threshold segmentation on the original image to obtain a target-area image;
Step (2), segmenting the region above −20 dB inside the target area and selecting the components that need parameter estimation;
Step (3), cyclically estimating the parameters of each component with a genetic algorithm to obtain the attributed scattering center parameter set
Θ = {θ_i}, θ_i = [A_i, α_i, x_i, y_i, L_i, φ̄_i, γ_i], i = 1, …, q
where θ_i denotes the parameters of the i-th attributed scattering center, A_i is the amplitude of the scattering center response, α_i characterizes the frequency dependence of the response, x_i and y_i are the positions of the scattering center in the range and azimuth directions, L_i characterizes the length of a distributed scattering center, φ̄_i characterizes the azimuth angle of a distributed scattering center relative to the SAR sensor, γ_i characterizes the azimuth dependence of the scattering response of a localized scattering center, and q is the number of attributed scattering centers;
Step S1.2, reconstructing from the attributed scattering centers to obtain a reconstructed image x' based on the attributed scattering centers, specifically:
Step (1), computing the response E_i of each attributed scattering center with the following formula:
E_i(f, φ) = A_i · (j f / f_c)^{α_i} · exp(−j(4πf/c)(x_i cos φ + y_i sin φ)) · sinc((2πf/c) L_i sin(φ − φ̄_i)) · exp(2πf γ_i sin φ)
where j is the imaginary unit, f is the frequency, f_c is the center frequency of the radar signal, φ is the azimuth angle, c is the propagation speed of the electromagnetic wave, and sinc is the sampling function;
Step (2), summing the responses E_i of all attributed scattering centers to obtain the total response E;
Step (3), designing −35 dB Taylor windows in the x and y directions and multiplying the attributed scattering center imaging result E by the two windows, which together form the filter;
Step (4), zero-padding the filtered result to a set size, usually the size of the original picture, and obtaining the reconstructed image x' based on the attributed scattering centers through a 2D-IFFT.
Step S2, generating an initial adversarial sample x_adv = x, setting the maximum number of iterations Q (preferably 1000), and initializing the current iteration count i = 1;
Step S3, computing the offset f(n, m) of each pixel of the adversarial sample by minimizing the loss function L with the L-BFGS algorithm, where
L = L_adv + λ · L_rec
L_adv denotes the adversarial loss that constrains the samples toward a successful attack: L_adv = max( Z(x')_t − max_{i≠t} Z(x')_i , −κ ), in which x' is the candidate adversarial sample, t is the class label of the adversarial sample to be generated, i ranges over the class labels other than t, Z(x')_i and Z(x')_t are the values corresponding to classes i and t in the logits output by the attacked model for the adversarial sample x', and the parameter κ sets the attack strength and is preferably set to 0 here;
L_rec denotes the reconstruction loss that produces a locally smooth image: L_rec = Σ_{n=1..h} Σ_{m=1..w} Σ_{(u,v)∈N(n,m)} √( (Δu^{(n)} − Δu^{(u)})² + (Δv^{(m)} − Δv^{(v)})² ), where h and w are the height and width of the image, N(n, m) is the four-neighbourhood of pixel (n, m), (u, v) are the coordinates of a neighbour of pixel (n, m), and (Δu^{(n)}, Δv^{(m)}) and (Δu^{(u)}, Δv^{(v)}) are the offsets of pixels (n, m) and (u, v);
the parameter λ balances the effect of the two loss functions and is set here to 0.005;
Step S4, computing the iterated adversarial sample x_adv from the offsets f(n, m) obtained in step S3, specifically comprising the following steps:
Step S4.1, initializing the pixel index (n, m) = (0, 0);
Step S4.2, if the value of pixel (n, m) in the reconstructed image based on the attributed scattering centers is 0, setting f(n, m) = (0, 0); otherwise leaving f(n, m) unchanged;
Step S4.3, computing the shifted position (u, v) of pixel (n, m), namely (u, v) = (n, m) + f(n, m);
Step S4.4, computing the pixel value V_{nm} at the shifted position of pixel (n, m) by bilinear interpolation:
V_{nm} = Σ_{(p,q)∈N(u,v)} V_{pq} · (1 − |u − p|) · (1 − |v − q|)
where N(u, v) is the set of the four integer pixel positions surrounding (u, v) and V_{pq} is the image value at position (p, q);
Step S4.5, judging whether m < w: if so, setting m = m + 1 and returning to step S4.2; if not, further judging whether n < h: if so, setting n = n + 1 and returning to step S4.2; otherwise taking the set of all computed pixel values as the iterated adversarial sample x_adv and proceeding to the next step;
Step S5, inputting the iterated adversarial sample x_adv obtained in step S4 into the classifier H(x) to obtain an output classification result, and setting i = i + 1;
Step S6, judging: if the output classification result differs from the original class label or i > Q, outputting the generated adversarial sample x_adv; otherwise returning to step S3.
In this embodiment:
1. Experimental setup. The MSTAR dataset is widely used to verify the performance of SAR image target recognition methods. It contains ten target classes — 2S1, BMP2, BTR70, T72, T62, BRDM2, BTR60, ZSU234, D7 and ZIL131 — with target azimuth angles ranging from 0° to 360°, depression angles from 15° to 17°, and a picture size of 128 × 128. The ResNet18 and MobileNet_v2 networks, whose classification accuracy exceeds 95%, are selected as the white-box models to be attacked. The attack methods used for comparison are DeepFool, SparseFool, C&W and JSMA. ASC-STA parameter settings: κ = 0 and λ = 0.005. The C&W, DeepFool, JSMA and SparseFool methods are all implemented via the Adversarial-Robustness-Toolbox (ART) toolkit with the default parameters.
The effectiveness of the proposed method is explained in three respects: the attack success rate, the distribution of the perturbed pixels, and the degree of feature change caused by the attack.
2. Attack success rate. The success rate is defined as the proportion, among all originally correctly classified clean input samples, of samples that are misclassified after perturbation. Table 1 compares the attack success rates of the proposed method and the comparison methods under the ResNet18 and MobileNet_v2 models. The proposed method achieves a high attack success rate, on the same level as the best-performing comparison methods.
TABLE 1. Attack success rates of the method of this embodiment and the comparison methods under the ResNet18 and MobileNet_v2 models
Model | ASC-STA | C&W | DeepFool | JSMA | SparseFool
ResNet18 | 100% | 99.79% | 100% | 96.69% | 100%
MobileNet_v2 | 100% | 100% | 100% | 100% | 99.96%
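The success rate reported in Table 1 can be computed as, for instance:

```python
import numpy as np

def attack_success_rate(y_true, y_pred_clean, y_pred_adv):
    """Fraction of the originally correctly classified samples whose
    perturbed versions are misclassified (the Table 1 metric)."""
    correct = y_pred_clean == y_true           # originally correct samples
    flipped = correct & (y_pred_adv != y_true) # ...that the attack flips
    return flipped.sum() / correct.sum()
```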
3. Referring to fig. 1, fig. 1 shows the adversarial attack results for one sample of class 2S1 in the MSTAR dataset: fig. 1(a) is the original unperturbed sample, and figs. 1(b) to (f) are the untargeted adversarial samples of this sample against the ResNet18 network under the proposed ASC-STA method and the C&W, DeepFool, JSMA and SparseFool algorithms, respectively; the first row shows the adversarial samples and the second row the perturbations. Unlike the comparison methods, the perturbation of the proposed method concentrates in the target area and carries clear semantic features.
4. Referring to fig. 2, fig. 2 shows the strong scattering point extraction results for the original sample and the generated adversarial samples of fig. 1: fig. 2(a) is the extraction result for the original sample of fig. 1, and figs. 2(b) to (f) are the extraction results for the untargeted adversarial samples generated against the ResNet18 network by the proposed ASC-STA method and the C&W, DeepFool, JSMA and SparseFool algorithms, respectively. Whereas the comparison methods change the strong scattering points only slightly, the proposed method greatly changes the strong scattering structure of the sample, producing obvious structural changes; the degree of feature change is quantified by the index below.
5. The embodiment uses an improved shape context descriptor to describe the degree of change of the strong scattering points before and after perturbation. The shape context descriptor performs well at describing global features, and the log-polar coordinate system it adopts can detect small local changes. Let s be a point feature of the original image and t a point feature of the perturbed image; the cost C_{s,t} of matching the point features of the two images is given by
C_{s,t} = (1/2) · Σ_{k=1..K} [g(k) − h(k)]² / [g(k) + h(k)]
where g(k) is the normalized histogram of the region where point s is located, h(k) is the normalized histogram of the region where point t is located, and K is the dimension of the histogram. The feature points before and after perturbation are aligned by the Hungarian matching algorithm; each output matching pair is taken to represent an original point feature and its corresponding perturbed point feature, and the degree of point-feature change is defined as the sum of the matching costs over all pairs. The experiment uses 20 equiangular sectors and 10 equally spaced logarithmic radius bins.
Fig. 3 shows the average degree of change of the strong scattering points before and after perturbation of the MSTAR dataset samples under the ResNet18 network, evaluated with the improved shape context descriptor. Under all tested model conditions, the proposed method shows a strong feature-interference capability on the test data, two to three times that of the comparison methods, which achieve a high feature perturbation rate only at the cost of a perturbation distributed over the whole image.
The above is only a preferred embodiment of the invention and is not intended to limit it; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement or improvement made within the spirit and principles of the invention shall fall within its scope of protection.
Claims (5)
1. An adversarial sample generation method for a SAR image target recognition model, characterized by comprising the following steps:
Step S1, inputting an original image x, and extracting the attributed scattering centers with a genetic algorithm to obtain the attributed scattering center parameter set; reconstructing from the attributed scattering centers to obtain a reconstructed image x' based on the attributed scattering centers;
Step S2, generating an initial adversarial sample x_adv = x, setting the maximum number of iterations Q, and initializing the current iteration count i = 1;
Step S3, computing the offset f(n, m) of each pixel of the adversarial sample by minimizing a loss function with the L-BFGS algorithm;
Step S4, computing the iterated adversarial sample x_adv from the offsets f(n, m) obtained in step S3;
Step S5, inputting the iterated adversarial sample x_adv obtained in step S4 into a classifier H(x) to obtain an output classification result, and setting i = i + 1;
Step S6, judging: if the output classification result differs from the original class label or i > Q, outputting the generated adversarial sample x_adv; otherwise returning to step S3.
2. The adversarial sample generation method for a SAR image target recognition model according to claim 1, characterized in that step S1 comprises the following steps:
Step S1.1, obtaining the attributed scattering center parameter set Θ, specifically:
Step (1), performing Otsu threshold segmentation on the original image to obtain a target-area image;
Step (2), segmenting the region above −20 dB inside the target area and selecting the components that need parameter estimation;
Step (3), cyclically estimating the parameters of each component with a genetic algorithm to obtain the attributed scattering center parameter set
Θ = {θ_i}, θ_i = [A_i, α_i, x_i, y_i, L_i, φ̄_i, γ_i], i = 1, …, q
where θ_i denotes the parameters of the i-th attributed scattering center, A_i is the amplitude of the scattering center response, α_i characterizes the frequency dependence of the response, x_i and y_i are the positions of the scattering center in the range and azimuth directions, L_i characterizes the length of a distributed scattering center, φ̄_i characterizes the azimuth angle of a distributed scattering center relative to the SAR sensor, γ_i characterizes the azimuth dependence of the scattering response of a localized scattering center, and q is the number of attributed scattering centers;
Step S1.2, reconstructing from the attributed scattering centers to obtain a reconstructed image x' based on the attributed scattering centers.
3. The method for generating the confrontation sample facing the SAR image target recognition model according to claim 2, wherein the step S1.2 specifically comprises the following steps:
step (1), calculating the response E_i of each attribute scattering center by adopting the following formula:

E_i(f, φ; θ_i) = A_i · (jf/f_c)^{α_i} · exp(−j(4πf/c)(x_i cos φ + y_i sin φ)) · sinc((2πf L_i/c) · sin(φ − φ̄_i)) · exp(−2πf γ_i sin φ)

wherein: j is the imaginary unit, f is the frequency, f_c is the center frequency of the radar signal, φ is the azimuth angle, c is the propagation speed of the wave, and sinc is the sampling function, sinc(x) = sin(x)/x;
step (2), adding the responses E_i of all the attribute scattering centers to obtain the total response E of all the attribute scattering centers;
step (3), designing −35 dB Taylor windows in the x direction and the y direction, and filtering by multiplying the attribute scattering center imaging result E by the Taylor windows in the x direction and the y direction;
and (4) zero filling 0 to a set size, usually the size of an original picture, for the filtering result, and obtaining a reconstructed image x' based on the attribute scattering center through 2D-IFFT.
4. The method for generating an adversarial sample for a SAR image target recognition model according to claim 3, wherein in the step S3 the L-BFGS algorithm minimizes the following loss function L:

L = L_adv + λ·L_rec
wherein: L_adv represents the adversarial loss, which constrains the generated sample to be a successful adversarial sample; its expression is as follows:

L_adv = max( max_{i≠t} Z(x')_i − Z(x')_t, −κ )

where x' is the candidate adversarial sample to be generated, t is the class label of the adversarial sample to be generated, i denotes the other class labels different from t, Z(x')_i is the value corresponding to class i in the logits output after the candidate adversarial sample x' is input into the model to be attacked, and Z(x')_t is the value corresponding to class t in the logits output after the candidate adversarial sample x' is input into the model to be attacked; the parameter κ is used for setting the attack strength;
L_rec represents the reconstruction loss, which makes the generated image locally smooth; its expression is as follows:

L_rec = Σ_{n=1}^{h} Σ_{m=1}^{w} Σ_{(u,v)∈N(n,m)} √( (Δu^{(n)} − Δu^{(u)})² + (Δv^{(m)} − Δv^{(v)})² )

where h and w are the height and width of the image respectively, N(n,m) is the four-neighborhood of the pixel (n,m), (u,v) are the coordinates of the four neighbors of the pixel (n,m), (Δu^{(n)}, Δv^{(m)}) is the offset of the pixel (n,m), and (Δu^{(u)}, Δv^{(v)}) is the offset of the pixel (u,v);
the parameter λ balances the effect of the two loss functions.
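The two loss terms of claim 4 can be sketched as follows. The logits, the offset fields (Δu, Δv), and the value of λ are illustrative; note also that np.roll treats the image as cyclic at the border, which differs slightly from a strict four-neighborhood and is a simplification of this sketch.

```python
import numpy as np

def adv_loss(logits, t, kappa=0.0):
    """Adversarial loss: margin between the best non-target logit and the
    target-class logit, floored at -kappa (kappa sets the attack strength)."""
    z = np.asarray(logits, dtype=float)
    other = np.max(np.delete(z, t))
    return max(other - z[t], -kappa)

def flow_smoothness_loss(du, dv):
    """Reconstruction loss: for every pixel, penalize the difference between
    its flow offset and the offsets of its four neighbours."""
    loss = 0.0
    for s in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nu = np.roll(du, s, axis=(0, 1))   # neighbour offsets (cyclic at border)
        nv = np.roll(dv, s, axis=(0, 1))
        loss += np.sqrt((du - nu) ** 2 + (dv - nv) ** 2).sum()
    return loss

def total_loss(logits, t, du, dv, lam=0.05, kappa=0.0):
    """Combined objective minimized by L-BFGS; lam balances the two terms."""
    return adv_loss(logits, t, kappa) + lam * flow_smoothness_loss(du, dv)
```

A constant flow field costs nothing under the smoothness term, so the optimizer is pushed toward locally coherent pixel displacements rather than scattered ones.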
5. The method for generating an adversarial sample for a SAR image target recognition model according to claim 4, wherein the step S4 comprises the following steps:
step S4.1, initializing a pixel point (n, m) = (0, 0);
step S4.2, if the value of the pixel point (n, m) in the reconstructed image based on the attribute scattering centers is 0, setting the offset f(n, m) = (0, 0); otherwise, keeping f(n, m) unchanged;
step S4.3, calculating the position (u, v) of the pixel point (n, m) after the offset, that is, (u, v) = (n, m) + f (n, m);
step S4.4, calculating the pixel value V_nm of the pixel point (n, m) by bilinear interpolation at the shifted position:

V_nm = Σ_{(p,q)∈N(u,v)} x_{(p,q)} · (1 − |u − p|) · (1 − |v − q|)

where N(u,v) denotes the four integer-pixel neighbors of the position (u, v) and x_{(p,q)} is the pixel value at position (p, q);
step S4.5, determining whether m is smaller than w; if so, setting m = m + 1 and returning to step S4.2; if not, further determining whether n is smaller than h: if so, setting n = n + 1 and returning to step S4.2; otherwise, taking the set of pixel values of all the pixel points as the adversarial sample x_adv of the current iteration, and proceeding to the next step.
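Steps S4.1–S4.5 amount to warping the image with a flow field and resampling it bilinearly. Below is a vectorized NumPy sketch (rather than the claim's per-pixel loop), under the added assumption that out-of-range sampling positions are clipped to the image border.

```python
import numpy as np

def warp_with_flow(img, du, dv):
    """Steps S4.1-S4.5: shift every pixel (n, m) by its flow offset f(n, m)
    (zeroed where the reconstruction is 0, step S4.2) and resample the image
    with bilinear interpolation to produce the candidate sample x_adv."""
    h, w = img.shape
    du = np.where(img == 0, 0.0, du)                 # step S4.2
    dv = np.where(img == 0, 0.0, dv)
    n, m = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    u = np.clip(n + du, 0, h - 1)                    # step S4.3: shifted position
    v = np.clip(m + dv, 0, w - 1)
    # Step S4.4: bilinear interpolation over the four integer neighbours.
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    u1, v1 = np.minimum(u0 + 1, h - 1), np.minimum(v0 + 1, w - 1)
    a, b = u - u0, v - v0
    return ((1 - a) * (1 - b) * img[u0, v0] + (1 - a) * b * img[u0, v1]
            + a * (1 - b) * img[u1, v0] + a * b * img[u1, v1])

# Zero flow must reproduce the image; a half-pixel shift blends neighbours.
img = np.arange(16, dtype=float).reshape(4, 4) + 1.0
out = warp_with_flow(img, np.zeros((4, 4)), np.zeros((4, 4)))
out_shift = warp_with_flow(img, np.full((4, 4), 0.5), np.zeros((4, 4)))
```

With zero flow the warp is the identity, and a uniform half-pixel shift averages each pixel with its lower neighbour, matching the interpolation weights of step S4.4.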
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211027951.9A CN115409705A (en) | 2022-08-25 | 2022-08-25 | Countermeasure sample generation method for SAR image target identification model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115409705A true CN115409705A (en) | 2022-11-29 |
Family
ID=84162149
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211027951.9A Pending CN115409705A (en) | 2022-08-25 | 2022-08-25 | Countermeasure sample generation method for SAR image target identification model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115409705A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
CN116614189A (en) * | 2023-07-18 | 2023-08-18 | 中国人民解放军国防科技大学 | Method and device for generating countermeasure sample for radio interference identification |
CN116614189B (en) * | 2023-07-18 | 2023-11-03 | 中国人民解放军国防科技大学 | Method and device for generating countermeasure sample for radio interference identification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||