CN113191268B - SAR target recognition countermeasure sample generation method based on depth coding network - Google Patents

SAR target recognition countermeasure sample generation method based on depth coding network

Info

Publication number
CN113191268B
CN113191268B (application CN202110483002.0A)
Authority
CN
China
Prior art keywords
encoder network
target recognition
training
synthetic aperture
recognition model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110483002.0A
Other languages
Chinese (zh)
Other versions
CN113191268A (en)
Inventor
杜川 (Du Chuan)
刘志博 (Liu Zhibo)
张磊 (Zhang Lei)
徐世友 (Xu Shiyou)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat-sen University
Original Assignee
Sun Yat-sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202110483002.0A priority Critical patent/CN113191268B/en
Publication of CN113191268A publication Critical patent/CN113191268A/en
Application granted granted Critical
Publication of CN113191268B publication Critical patent/CN113191268B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G06V20/13 - Satellite images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Astronomy & Astrophysics (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a SAR target recognition adversarial sample generation method that implements targeted attacks and non-targeted attacks against a specific SAR target recognition model. The method includes training an encoder network with a targeted-attack loss function loss_target or a non-targeted-attack loss function loss_nontarget, among other steps, so that the trained encoder network can rapidly generate adversarial samples for input pictures. The method is widely applicable in the technical field of radar target recognition.

Description

SAR target recognition countermeasure sample generation method based on depth coding network
Technical Field
The invention relates to the technical field of radar target recognition, and in particular to a method for generating adversarial samples (countermeasure samples) for SAR target recognition based on a depth coding network.
Background
Images generated by a synthetic aperture radar are relatively complex, so they are usually recognized with computer techniques such as a synthetic aperture radar target recognition model, which can reach high recognition accuracy. The implementation principle of the synthetic aperture radar target recognition model, or the specific processing algorithm inside it, is referred to as a SAR-TR algorithm. However, because such a model is built on a deep neural network, whose mapping is highly nonlinear, a target in an image may be misrecognized as a similar target.
Adversarial samples can be used to verify the recognition performance of a synthetic aperture radar target recognition model. Existing adversarial sample generation algorithms, including the C&W algorithm, must run a fresh iterative optimization every time an adversarial sample is generated, so the generation speed is slow and real-time performance is poor, especially when many adversarial samples must be generated consecutively.
Term interpretation:
Synthetic Aperture Radar (SAR): synthetic Aperture Radar (SAR) is a high resolution imaging radar that can obtain high resolution radar images resembling electrophotography under meteorological conditions of extremely low visibility. The radar with a larger equivalent antenna aperture is synthesized by a data processing method for a real antenna aperture with a smaller size by utilizing the relative motion of the radar and the target.
Target identification: the image containing the object is classified. The SAR-TR (synthetic aperture radar target recognition) algorithm, which is referred to herein, refers to classifying a SAR image containing a target to distinguish the class of the target. I.e. to distinguish whether the object within this image is an aircraft or a vehicle at all.
Challenge sample: samples that misclassified the target recognition algorithm. Here, the challenge sample obtained by adding a minute disturbance to the sample can misjudge the target recognition algorithm. In brief, for a sample image x, the algorithm described herein can generate a singleClose enough to x, the human eye sees essentially no difference, but will/>Input to the target recognition algorithm can cause it to produce misclassification, called/>Is the challenge sample generated by x. /(I)The difference between the challenge sample and the original sample, referred to herein as the interference added to the original sample, is described. That is, the challenge sample may consist of the base sample plus interference. It is desirable that in case of a successful attack, the interference is small enough.
Targeted attack (targeted-attack): the generated adversarial sample makes the SAR-TR network output a fixed, preset class. For example, whatever class of picture is input, the generated adversarial sample causes the SAR-TR network to misjudge it as one fixed class (for example, an aircraft). The purpose of this attack is to force a directed misjudgment toward a particular class.
Non-targeted attack (nontargeted-attack): the generated adversarial sample causes the SAR-TR network to misjudge the input as any class other than the true one, without a preset target. For example, when a picture of a tank is input, the generated adversarial sample causes the SAR-TR network to misjudge it as some arbitrary other class. The purpose of this attack is simply to induce a misjudgment, regardless of which wrong class is produced.
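In classifier terms, the two success criteria can each be written as a one-line check. The sketch below is a generic Python (PyTorch) illustration of these definitions, not code from the patent.

```python
import torch

def attack_succeeded(logits, true_class, target_class=None):
    """Targeted: the model must output the preset class; non-targeted: any class other than the true one."""
    pred = logits.argmax(dim=-1)
    if target_class is not None:
        return pred == target_class     # targeted attack succeeds only on the preset class
    return pred != true_class           # non-targeted attack succeeds on any misjudgment
```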
Disclosure of Invention
In view of at least one of the above technical problems, an object of the present invention is to provide a method for generating SAR target recognition adversarial samples based on a depth coding network.
In one aspect, a method for generating SAR target recognition adversarial samples based on a depth coding network comprises:
acquiring an image to be processed, the image to be processed being an image to be recognized by a synthetic aperture radar target recognition model;
inputting the image to be processed to an encoder network;
obtaining an output result of the encoder network as an adversarial sample, the adversarial sample being used to perform a targeted attack on the synthetic aperture radar target recognition model;
wherein the encoder network is a fully convolutional neural network whose output has the same size as its input, the encoder network is trained with training samples, and the loss function used in the training process is

loss_target = ‖x̂ − x‖₂² + λ·( max_{i≠t} f(x̂)_i − f(x̂)_t + k )⁺,

where x denotes a training sample input to the encoder network, x̂ denotes the corresponding output result of the encoder network, f(·) denotes the mapping corresponding to the synthetic aperture radar target recognition model, t denotes the preset target class, (e)⁺ is shorthand for max(e, 0), λ denotes a settable regularization coefficient, and k denotes a settable parameter.
In another aspect, a method for generating SAR target recognition adversarial samples based on a depth coding network comprises:
acquiring an image to be processed, the image to be processed being an image to be recognized by a synthetic aperture radar target recognition model;
inputting the image to be processed into an encoder network;
obtaining an output result of the encoder network as an adversarial sample, the adversarial sample being used to perform a non-targeted attack on the synthetic aperture radar target recognition model;
wherein the encoder network is a fully convolutional neural network whose output has the same size as its input, the encoder network is trained with training samples, and the loss function used in the training process is

loss_nontarget = ‖x̂ − x‖₂² + λ·( f(x̂)_y − max_{i≠y} f(x̂)_i + k )⁺,

where x denotes a training sample input to the encoder network, x̂ denotes the corresponding output result of the encoder network, f(·) denotes the mapping corresponding to the synthetic aperture radar target recognition model, f(x̂)_y denotes the prediction component output by the synthetic aperture radar target recognition model at the true class y of the training sample x, λ denotes a settable regularization coefficient, and k denotes a settable parameter.
Further, the method for generating SAR target recognition adversarial samples based on a depth coding network further comprises:
acquiring an MSTAR data set;
Cutting the image in the MSTAR data set to obtain an image slice with a fixed size; only one recognition target is included in one of the image slices;
The image slice is taken as the training sample.
Further, the training process for the encoder network includes:
initializing the encoder network using initialization parameters; the initialization parameter is an array obtained by sampling from random numbers meeting Gaussian distribution;
Inputting the training samples into the encoder network, and obtaining an output result of the encoder network;
inputting the output result of the encoder network into the synthetic aperture radar target recognition model to obtain the output result f(x̂) of the synthetic aperture radar target recognition model;
calculating the value of the loss function according to the output result f(x̂) of the synthetic aperture radar target recognition model;
when the value of the loss function does not meet the training ending condition, using an Adam optimizer to update parameters of the encoder network;
and ending the training process when the value of the loss function meets the training ending condition.
Further, each convolution layer in the encoder network comprises a batch normalization layer, and the last convolution layer of the encoder network uses an activation function that maps the result w of the last convolution into the range [0, 1].
Further, the synthetic aperture radar target recognition model is trained, and training samples for training the synthetic aperture radar target recognition model are the same as training samples for training the encoder network.
In another aspect, embodiments of the present invention further include a computer apparatus including a memory for storing at least one program and a processor for loading the at least one program to perform the method for generating SAR target recognition adversarial samples based on a depth coding network described in the embodiments.
In another aspect, embodiments of the present invention further include a storage medium having stored therein a processor-executable program which, when executed by a processor, performs the method for generating SAR target recognition adversarial samples based on a depth coding network described in the embodiments.
The beneficial effects of the invention are as follows: the encoder network obtained by the training method of the embodiments can rapidly generate adversarial samples, and the generated adversarial samples can perform targeted or non-targeted attacks on the synthetic aperture radar target recognition model, thereby verifying its recognition performance. Because an improved loss function is used to train the encoder network, the trained encoder network can generate an adversarial sample for an input image without further iterative optimization, so the generation speed is high and real-time performance is good.
Drawings
FIG. 1 is a block diagram of an encoder network coupled to a synthetic aperture radar target recognition model in an embodiment;
FIG. 2 is a schematic diagram of an original sample A input into the encoder network in the case of a targeted attack in an embodiment;
FIG. 3 is a schematic diagram of the corresponding adversarial sample output by the encoder network after receiving the original sample A shown in FIG. 2, in an embodiment;
FIG. 4 is a schematic diagram of an original sample A input into the encoder network in the case of a non-targeted attack in an embodiment;
FIG. 5 is a schematic diagram of the corresponding adversarial sample output by the encoder network after receiving the original sample A shown in FIG. 4, in an embodiment.
Detailed Description
In this embodiment, a synthetic aperture radar target recognition model based on a convolutional neural network is used to recognize synthetic aperture radar images, and adversarial samples are generated by an encoder network; the adversarial samples can be used to attack the recognition process of the synthetic aperture radar target recognition model, thereby verifying its recognition performance.
In this embodiment, an encoder network whose output has the same size as its input is constructed. The encoder network is a fully convolutional network composed of four 1×1 convolution layers, each with a convolution kernel size of 1 and a stride of 1, so the output image and the input image have the same size. Each convolution layer contains a Batch Normalization layer, which helps avoid overfitting. The last convolution layer is followed by an activation function that maps the values of the output image into the range [0, 1], where w denotes the result of the last convolution. When an image sample x is input, the output of the encoder network is the corresponding adversarial sample x̂.
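To make the architecture concrete, below is a minimal PyTorch sketch of such an encoder. It is an illustration only: the hidden channel width, the ReLU activations between layers, and the use of a sigmoid as the final [0, 1]-mapping activation are assumptions for illustration, not details prescribed by the embodiment.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Fully convolutional encoder: four 1x1 conv layers with batch normalization;
    the final activation maps the last convolution output into [0, 1], so the output
    has the same size and value range as the input SAR slice."""
    def __init__(self, channels=1, hidden=64):  # hidden width is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1, stride=1), nn.BatchNorm2d(hidden), nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=1, stride=1), nn.BatchNorm2d(hidden), nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=1, stride=1), nn.BatchNorm2d(hidden), nn.ReLU(),
            nn.Conv2d(hidden, channels, kernel_size=1, stride=1), nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        # sigmoid (assumed) maps the last convolution output w into [0, 1]
        return torch.sigmoid(self.net(x))

# a 128x128 single-channel SAR slice goes in, an adversarial sample of the same size comes out
x = torch.rand(1, 1, 128, 128)
x_adv = Encoder()(x)
```

Because every convolution is 1×1 with stride 1, the spatial size is preserved for any input resolution, which is what makes the output directly usable as an adversarial image.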
The sample input to the encoder network is denoted x, and the output of the encoder network is denoted x̂ = E(x), where E(·) is the mapping of the encoder network. The difference δ = x̂ − x between the adversarial sample x̂ and the input sample x is denoted as the interference added to the input sample x.
The encoder network is trained so that it can quickly generate adversarial samples. Training has two possible goals: first, that the encoder network quickly generates adversarial samples used for targeted attacks on the synthetic aperture radar target recognition model; second, that it quickly generates adversarial samples used for non-targeted attacks on the model. A targeted attack means that the generated x̂ causes the SAR-TR algorithm to misjudge the input as a specific preset class. A non-targeted attack means that the generated x̂ causes the SAR-TR algorithm to misjudge the input as any incorrect class.
For targeted attacks, the loss function is designed based on the following idea: the generated adversarial sample should meet two requirements: (1) the adversarial sample differs as little as possible from the original sample; (2) the adversarial sample causes a directed misjudgment of the SAR-TR algorithm. Denote the mapping of the SAR-TR network by f(·) and the SAR-TR classification result by C(·); for an input x, C(x) = argmax_i f(x)_i. The generated adversarial sample x̂ = x + δ is input to the SAR-TR network to obtain the output f(x̂). For an input x, we want to generate an adversarial sample that causes the SAR-TR model to misjudge it as a preset class t. Formally, this is the constrained optimization problem

minimize D(x, x + δ)
s.t. C(x + δ) = t,
x + δ ∈ [0, 1]^(w×h),

where D is a distance metric function (here the L2 norm) and δ is the interference added to the original sample. Because the constraint C(x + δ) = t is nonlinear and non-differentiable, it is replaced by an equivalent condition: an objective function g(·) is defined such that g(x + δ) ≤ 0 if and only if C(x + δ) = t. In this embodiment,

g(x + δ) = ( max_{i≠t} f(x + δ)_i − f(x + δ)_t )⁺,

where (e)⁺ is shorthand for max(e, 0). The original problem is thus converted into the optimization problem

minimize D(x, x + δ)  s.t.  g(x + δ) ≤ 0,  x + δ ∈ [0, 1]^(w×h).

The distance metric function D(·) used is the L2 norm, i.e. D(x, x + δ) = ‖δ‖₂. Using the Lagrange multiplier method, the problem becomes the unconstrained optimization

minimize ‖δ‖₂² + λ·g(x + δ),

where λ is a self-set regularization coefficient. To enhance the robustness of the attack, g is modified to

g(x + δ) = ( max_{i≠t} f(x + δ)_i − f(x + δ)_t + k )⁺,

where k ∈ [0, 1] is a self-set parameter. Finally, the loss function of the targeted attack is obtained:

loss_target = ‖x̂ − x‖₂² + λ·( max_{i≠t} f(x̂)_i − f(x̂)_t + k )⁺,

where ‖x̂ − x‖₂ is the L2 distance between the generated adversarial sample x̂ and the sample x. Both λ and k are self-set parameters; in this embodiment λ = 1 and k = 0.1.
For non-targeted attacks, the adversarial sample x̂ generated by the encoder network should differ as little as possible from the original sample while causing the SAR-TR algorithm to misjudge it as any incorrect class. The constructed loss function is therefore

loss_nontarget = ‖x̂ − x‖₂² + λ·( f(x̂)_y − max_{i≠y} f(x̂)_i + k )⁺,

where f(x̂)_y is the component of the SAR-TR prediction output at the true class y of the sample, i.e. the probability that the sample is predicted correctly. Both λ and k are self-set parameters; in this embodiment λ = 1 and k = 0.1.
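A matching sketch of the non-targeted loss under the same assumptions (softmax outputs, λ = 1, k = 0.1); y denotes the true class of the training sample, and the hinge term follows the reconstruction of the loss given above.

```python
import torch

def nontargeted_loss(x, x_adv, probs, y, lam=1.0, k=0.1):
    """loss_nontarget = ||x_adv - x||_2^2 + lam * (f_y - max_{i != y} f_i + k)^+, averaged over the batch.

    probs: (N, num_classes) softmax output of the SAR-TR model on x_adv
    y:     (N,) true class indices of the original samples
    """
    l2 = ((x_adv - x) ** 2).flatten(1).sum(dim=1)        # squared L2 distance per sample
    f_y = probs.gather(1, y.unsqueeze(1)).squeeze(1)     # probability of the true class
    masked = probs.clone()
    masked.scatter_(1, y.unsqueeze(1), float("-inf"))    # exclude the true class
    f_other = masked.max(dim=1).values                   # best wrong-class probability
    hinge = torch.clamp(f_y - f_other + k, min=0.0)      # push the true class below some other class
    return (l2 + lam * hinge).mean()
```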
After the loss functions are designed, the encoder network is trained with one of them. If the adversarial samples generated by the trained encoder network should perform targeted attacks on the synthetic aperture radar target recognition model, loss_target is used as the loss function; if they should perform non-targeted attacks, loss_nontarget is used as the loss function. Apart from the choice of loss function, the two training procedures are otherwise identical; a code sketch of the resulting training loop is given after step S9 below.
The step of training the encoder network may comprise:
S1, acquiring an MSTAR data set; specifically, the MSTAR data set, which contains SAR images of targets of different types, is downloaded and preprocessed to obtain 5950 image slices of size 128×128, each containing a target; the slices are divided into ten groups, each group containing one target class; 2747 SAR image slices with a depression angle of 17° are used as the training set and 3203 SAR image slices with a depression angle of 15° as the test set; the gray values of the images are normalized to [0, 1], and data augmentation such as random scaling and rotation is applied to avoid overfitting;
S2, cutting an image in the MSTAR data set to obtain an image slice with a fixed size; only one recognition target is included in one image slice;
S3, taking the image slice as a training sample;
S4, initializing the encoder network using initialization parameters, the initialization parameters being an array sampled from random numbers obeying a Gaussian distribution; specifically, values are randomly sampled from a Gaussian distribution with mean 0 and variance 0.01, and the sampled array is used as the initialization parameters of the encoder network;
S5, inputting the training samples into the encoder network and obtaining the output result of the encoder network; specifically, with batch_size = 64, the training samples are input into the encoder network to obtain the encoder output x̂;
S6, inputting the output result of the encoder network into the synthetic aperture radar target recognition model to obtain the output result f(x̂) of the synthetic aperture radar target recognition model; specifically, the encoder output x̂ is input into the synthetic aperture radar target recognition model to obtain its output;
S7, calculating the value of the loss function according to the output result f(x̂) of the synthetic aperture radar target recognition model; specifically, if the adversarial samples generated by the trained encoder network are intended for targeted attacks on the synthetic aperture radar target recognition model, loss_target is used as the loss function, and if they are intended for non-targeted attacks, loss_nontarget is used as the loss function;
S8, when the value of the loss function does not meet the training ending condition, updating the parameters of the encoder network using an Adam optimizer; specifically, an Adam optimizer with a learning rate of 10⁻⁴ and standard hyper-parameters β₁ = 0.5, β₂ = 0.999 is used, and during the parameter update only the parameters of the encoder network are updated, while the parameters of the synthetic aperture radar target recognition model are not updated;
after step S8 is performed, the process may return to step S5 and continue to perform the steps in sequence;
S9, when the value of the loss function meets the training ending condition, ending the training process.
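To make steps S4 to S9 concrete, the following is a minimal sketch of the training loop under stated assumptions: Encoder, targeted_loss, and nontargeted_loss are the illustrative definitions sketched earlier, sar_tr_model is the pre-trained recognition model kept frozen, train_loader yields batches of normalized MSTAR slices with their labels, and a fixed epoch count stands in for the training ending condition of step S9.

```python
import torch

def train_encoder(encoder, sar_tr_model, train_loader, targeted=True, target_class=0,
                  epochs=50, lam=1.0, k=0.1, device="cuda"):
    encoder.to(device).train()
    sar_tr_model.to(device).eval()
    for p in sar_tr_model.parameters():              # S8: recognition-model parameters stay fixed
        p.requires_grad_(False)

    # S4: initialize encoder weights from a zero-mean Gaussian (std 0.1, i.e. variance 0.01)
    for m in encoder.modules():
        if isinstance(m, torch.nn.Conv2d):
            torch.nn.init.normal_(m.weight, mean=0.0, std=0.1)

    # S8: Adam optimizer with learning rate 1e-4 and betas (0.5, 0.999)
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-4, betas=(0.5, 0.999))

    for epoch in range(epochs):                      # fixed epoch count stands in for the ending condition (S9)
        for x, y in train_loader:                    # S5: batches (batch_size = 64) of normalized slices
            x, y = x.to(device), y.to(device)
            x_adv = encoder(x)                       # S5: encoder output
            probs = torch.softmax(sar_tr_model(x_adv), dim=1)   # S6: SAR-TR output f(x_adv)
            if targeted:                             # S7: the loss depends on the attack type
                t = torch.full_like(y, target_class)
                loss = targeted_loss(x, x_adv, probs, t, lam, k)
            else:
                loss = nontargeted_loss(x, x_adv, probs, y, lam, k)
            opt.zero_grad()
            loss.backward()                          # S8: only the encoder parameters are updated
            opt.step()
    return encoder
```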
After training of the encoder network is completed, the encoder network can be used to generate adversarial samples. When generating adversarial samples, the encoder network may be connected to the synthetic aperture radar target recognition model as shown in FIG. 1. FIG. 1 shows an end-to-end neural network model in which the first half is the encoder network and the second half is the synthetic aperture radar target recognition model, here a ResNet network.
The process of generating challenge samples using the encoder network specifically comprises the steps of:
P1, acquiring an image to be processed, the image to be processed being an image to be recognized by the synthetic aperture radar target recognition model;
P2, inputting the image to be processed into the encoder network;
P3, obtaining the output result of the encoder network as the adversarial sample.
In step P1, the image to be processed is an image that is to be input into the synthetic aperture radar target recognition model for recognition; it may contain a target awaiting recognition, such as an aircraft or an automobile. In step P2, the image to be processed is input into the trained encoder network, and the output of the encoder network is the adversarial sample.
If the encoder network used in step P2 was trained with loss_target as the loss function, the adversarial sample output by the encoder network in step P3 can be used to perform targeted attacks on the synthetic aperture radar target recognition model.
If the encoder network used in step P2 was trained with loss_nontarget as the loss function, the adversarial sample output by the encoder network in step P3 can be used to perform non-targeted attacks on the synthetic aperture radar target recognition model.
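Once the encoder is trained, generating an adversarial sample for a new image reduces to a single forward pass (steps P1 to P3). A minimal usage sketch, reusing the illustrative Encoder above and assuming the image has already been normalized to [0, 1] and cut to a 128×128 slice:

```python
import torch

@torch.no_grad()
def generate_adversarial(encoder, image):
    """P1-P3: feed the image to be processed through the trained encoder and return the adversarial sample."""
    encoder.eval()
    x = image.unsqueeze(0) if image.dim() == 3 else image   # add a batch dimension if needed
    x_adv = encoder(x)
    return x_adv.squeeze(0)

# example: one 128x128 single-channel SAR slice (placeholder data)
sample = torch.rand(1, 128, 128)
adv = generate_adversarial(Encoder(), sample)   # whether this yields a targeted or non-targeted
                                                # attack depends on which loss the encoder was trained with
```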
In this embodiment, the synthetic aperture radar target recognition model attacked by the adversarial samples, whether in a targeted or non-targeted manner, has also been trained. The training samples used to train the synthetic aperture radar target recognition model may be the same as those used to train the encoder network, i.e., the training samples obtained in steps S1-S3 may also be used to train the synthetic aperture radar target recognition model.
The encoder network obtained by the training method of this embodiment can rapidly generate adversarial samples, and the generated adversarial samples can perform targeted or non-targeted attacks on the synthetic aperture radar target recognition model, thereby verifying its recognition performance. Because the improved loss function is used during training, the trained encoder network requires no further iterative optimization, so adversarial samples are generated quickly, and real-time performance is good, especially when many adversarial samples must be generated consecutively.
Simulation experiments on the encoder network were carried out in a Python software environment on a computer with a CPU with a main frequency of 8 GHz, 32 GB of memory, and a V100 graphics card, with the encoder network trained by the training method of this embodiment.
The simulation experiments compare several existing adversarial sample generation methods, including FGSM, BIM, JSMA, DeepFool, and the C&W method, with the method described in the invention. Training is performed on the training set, testing on the test set, and the attack success rate and the time needed to generate an adversarial sample are calculated.
The original sample A input into the encoder network in the case of a targeted attack is shown in FIG. 2, and the corresponding adversarial sample output by the encoder network is shown in FIG. 3; the encoder network was trained with loss_target as the loss function.
The original sample A input into the encoder network in the case of a non-targeted attack is shown in FIG. 4, and the corresponding adversarial sample output by the encoder network is shown in FIG. 5; the encoder network was trained with loss_nontarget as the loss function.
Table 1 compares the attack success rate, interference power, and generation time of several algorithms for targeted attacks; Table 2 makes the same comparison for non-targeted attacks.
As can be seen from Tables 1 and 2, for both targeted and non-targeted attacks the method provided by the invention has excellent real-time performance and is far faster than the existing algorithms listed in the tables: with an attack success rate almost equal to that of the best algorithm, the time required to generate one adversarial sample is only 0.0005 s and 0.0006 s, respectively. Considering attack success rate, interference power, and generation time together, the method provided by the invention is superior to the existing algorithms listed in the tables.
The experiments also tested the robustness of the algorithm of the invention; Table 3 shows the attack success rate when noise of different powers is added to the input samples. The method provided by the invention is robust to noise of different powers, and the attack success rate does not decrease.
Table 1. Success rate, interference power, and generation time of each algorithm for targeted attacks

Algorithm            Success rate (%)    Interference power    Generation time (s)
FGSM                 62.28               0.3169                0.0148
BIM                  97.93               0.1671                1.1119
PGD                  97.22               0.2151                0.9129
C&W                  98.72               0.0815                1.5091
Present invention    97.66               0.1262                0.0005
Table 2 success rate, interference power, and time of generation of each algorithm for non-targeted attack
TABLE 3 attack success rate after noise is added to the input
The above simulation experiments confirm that the adversarial samples generated by the encoder network trained with the training method of this embodiment have a high attack success rate and good noise robustness, and that the generation of adversarial samples is very fast, overcoming the poor real-time performance of the prior art.
The adversarial sample generation method for synthetic aperture radar target recognition in this embodiment may be implemented by writing a computer program that performs the method, writing the computer program into a computer device or a storage medium, and executing the method when the computer program is read out and run.
It should be noted that, unless otherwise specified, when a feature is referred to as being "fixed" or "connected" to another feature, it may be directly or indirectly fixed or connected to the other feature. Further, the descriptions of the upper, lower, left, right, etc. used in this disclosure are merely with respect to the mutual positional relationship of the various components of this disclosure in the drawings. As used in this disclosure, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. In addition, unless defined otherwise, all technical and scientific terms used in this example have the same meaning as commonly understood by one of ordinary skill in the art. The terminology used in the description of the embodiments is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The term "and/or" as used in this embodiment includes any combination of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this disclosure to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element of the same type from another. For example, a first element could also be termed a second element, and, similarly, a second element could also be termed a first element, without departing from the scope of the present disclosure. The use of any and all examples, or exemplary language (e.g., "such as") provided herein, is intended merely to better illuminate embodiments of the invention and does not pose a limitation on the scope of the invention unless otherwise claimed.
It should be appreciated that embodiments of the invention may be implemented or realized by computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The methods may be implemented in a computer program using standard programming techniques, including a non-transitory computer readable storage medium configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, in accordance with the methods and drawings described in the specific embodiments. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Furthermore, the operations of the processes described in the present embodiments may be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes (or variations and/or combinations thereof) described in this embodiment may be performed under control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications), by hardware, or combinations thereof, that collectively execute on one or more processors. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable computing platform, including, but not limited to, a personal computer, mini-computer, mainframe, workstation, network or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, and so forth. Aspects of the invention may be implemented in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optical read and/or write storage medium, RAM, ROM, etc., such that it is readable by a programmable computer, which when read by a computer, is operable to configure and operate the computer to perform the processes described herein. Further, the machine readable code, or portions thereof, may be transmitted over a wired or wireless network. When such media includes instructions or programs that, in conjunction with a microprocessor or other data processor, implement the steps described above, the invention described in this embodiment includes these and other different types of non-transitory computer-readable storage media. The invention also includes the computer itself when programmed according to the methods and techniques of the present invention.
The computer program can be applied to the input data to perform the functions described in this embodiment, thereby converting the input data to generate output data that is stored to the non-volatile memory. The output information may also be applied to one or more output devices such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including specific visual depictions of physical and tangible objects produced on a display.
The present invention is not limited to the above embodiments; modifications, equivalent substitutions, improvements, and the like that achieve the technical effects of the invention by the same means are all included within the spirit and principle of the invention. Various modifications and variations of the technical solution and/or of the embodiments are possible within the scope of the invention.

Claims (5)

1. A method for generating SAR target recognition adversarial samples based on a depth coding network, characterized by comprising the following steps:
acquiring an image to be processed, the image to be processed being an image to be recognized by a synthetic aperture radar target recognition model;
inputting the image to be processed to an encoder network;
obtaining an output result of the encoder network as an adversarial sample, the adversarial sample being used to perform a targeted attack on the synthetic aperture radar target recognition model;
wherein the encoder network is a fully convolutional neural network whose output has the same size as its input, the encoder network is trained with training samples, and the loss function used in the training process is

loss_target = ‖x̂ − x‖₂² + λ·( max_{i≠t} f(x̂)_i − f(x̂)_t + k )⁺,

where x denotes the training sample input to the encoder network, x̂ denotes the output result of the encoder network, f(·) denotes the mapping corresponding to the synthetic aperture radar target recognition model, t denotes the preset target class, (e)⁺ is shorthand for max(e, 0), λ denotes a set regularization coefficient, and k denotes a set parameter;
the step of training the encoder network comprises:
S1, acquiring an MSTAR data set; specifically, the MSTAR data set, which contains SAR images of targets of different types, is downloaded and preprocessed to obtain a plurality of image slices each containing a target; the image slices are divided into ten groups, each group containing one target class; some SAR image slices are used as the training set and other SAR image slices are used as the test set; the gray values of the image slices are normalized to [0, 1]; and data augmentation in the form of random scaling and rotation is performed;
S2, cutting an image in the MSTAR data set to obtain an image slice with a fixed size; only one recognition target is included in one image slice;
S3, taking the image slice as a training sample;
s4, initializing an encoder network by using an initialization parameter, wherein the initialization parameter is an array obtained by sampling from random numbers meeting Gaussian distribution, specifically, randomly sampling from Gaussian distribution, and taking the randomly sampled array as the initialization parameter of the encoder network;
S5, inputting the training samples into the encoder network and obtaining the output result of the encoder network; specifically, the training samples are input into the encoder network to obtain the encoder output x̂;
S6, inputting the output result of the encoder network into the synthetic aperture radar target recognition model to obtain the output result f(x̂) of the synthetic aperture radar target recognition model; specifically, the encoder output x̂ is input into the synthetic aperture radar target recognition model to obtain its output;
S7, calculating the value of the loss function according to the output result f(x̂) of the synthetic aperture radar target recognition model; specifically, when the adversarial samples generated by the trained encoder network are used to perform targeted attacks on the synthetic aperture radar target recognition model, loss_target is used as the loss function, and when the adversarial samples generated by the trained encoder network are used to perform non-targeted attacks on the synthetic aperture radar target recognition model, loss_nontarget = ‖x̂ − x‖₂² + λ·( f(x̂)_y − max_{i≠y} f(x̂)_i + k )⁺ is used as the loss function, where f(x̂)_y denotes the prediction component at the true class y of the training sample;
S8, when the value of the loss function does not meet the training ending condition, updating the parameters of the encoder network; specifically, an Adam optimizer is used for the parameter update, and during the parameter update only the parameters of the encoder network are updated, while the parameters of the synthetic aperture radar target recognition model are not updated;
After the step S8 is executed, returning to the step S5 to continuously execute the steps in sequence;
s9, when the value of the loss function meets the training ending condition, ending the training process.
2. The method for generating SAR target recognition adversarial samples based on a depth coding network according to claim 1, further comprising:
acquiring an MSTAR data set;
Cutting the image in the MSTAR data set to obtain an image slice with a fixed size; only one recognition target is included in one of the image slices;
The image slice is taken as the training sample.
3. The method for generating SAR target recognition adversarial samples based on a depth coding network according to claim 1, wherein the training process for the encoder network comprises:
initializing the encoder network using initialization parameters; the initialization parameter is an array obtained by sampling from random numbers meeting Gaussian distribution;
Inputting the training samples into the encoder network, and obtaining an output result of the encoder network;
inputting the output result of the encoder network into the synthetic aperture radar target recognition model to obtain the output result f(x̂) of the synthetic aperture radar target recognition model;
calculating the value of the loss function according to the output result f(x̂) of the synthetic aperture radar target recognition model;
when the value of the loss function does not meet the training ending condition, using an Adam optimizer to update parameters of the encoder network;
and ending the training process when the value of the loss function meets the training ending condition.
4. The method for generating SAR target recognition adversarial samples based on a depth coding network according to claim 1, wherein each convolution layer in the encoder network comprises a batch normalization layer, and the last convolution layer of the encoder network uses an activation function that maps the result w of the last convolution into the range [0, 1].
5. The method for generating SAR target recognition adversarial samples based on a depth coding network according to claim 1, wherein the synthetic aperture radar target recognition model is trained, and the training samples for training the synthetic aperture radar target recognition model are the same as the training samples for training the encoder network.
CN202110483002.0A 2021-04-30 2021-04-30 SAR target recognition countermeasure sample generation method based on depth coding network Active CN113191268B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110483002.0A CN113191268B (en) 2021-04-30 2021-04-30 SAR target recognition countermeasure sample generation method based on depth coding network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110483002.0A CN113191268B (en) 2021-04-30 2021-04-30 SAR target recognition countermeasure sample generation method based on depth coding network

Publications (2)

Publication Number Publication Date
CN113191268A CN113191268A (en) 2021-07-30
CN113191268B true CN113191268B (en) 2024-04-23

Family

ID=76983611

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110483002.0A Active CN113191268B (en) 2021-04-30 2021-04-30 SAR target recognition countermeasure sample generation method based on depth coding network

Country Status (1)

Country Link
CN (1) CN113191268B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108664894A (en) * 2018-04-10 2018-10-16 天津大学 The human action radar image sorting technique of neural network is fought based on depth convolution
CN111027439A (en) * 2019-12-03 2020-04-17 西北工业大学 SAR target recognition method for generating countermeasure network based on auxiliary classification
CN112216273A (en) * 2020-10-30 2021-01-12 东南数字经济发展研究院 Sample attack resisting method for voice keyword classification network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108664894A (en) * 2018-04-10 2018-10-16 天津大学 The human action radar image sorting technique of neural network is fought based on depth convolution
CN111027439A (en) * 2019-12-03 2020-04-17 西北工业大学 SAR target recognition method for generating countermeasure network based on auxiliary classification
CN112216273A (en) * 2020-10-30 2021-01-12 东南数字经济发展研究院 Sample attack resisting method for voice keyword classification network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
基于深度学习的雷达图像目标识别研究进展 [Research progress on radar image target recognition based on deep learning]; 潘宗序, 安全智, 张冰尘; 中国科学:信息科学 [Scientia Sinica Informationis] (12); pp. 98-111 *
基于深度生成网络的特征学习方法 [Feature learning methods based on deep generative networks]; 杜川; 中国博士学位论文全文数据库 [China Doctoral Dissertations Full-text Database]; pp. I136-120 *

Also Published As

Publication number Publication date
CN113191268A (en) 2021-07-30

Similar Documents

Publication Publication Date Title
US11448753B2 (en) System and method for transferring electro-optical (EO) knowledge for synthetic-aperture-radar (SAR)-based object detection
Pei et al. SAR automatic target recognition based on multiview deep learning framework
CN108427927B (en) Object re-recognition method and apparatus, electronic device, program, and storage medium
Sato et al. Apac: Augmented pattern classification with neural networks
CN109325490B (en) Terahertz image target identification method based on deep learning and RPCA
CN110895682B (en) SAR target recognition method based on deep learning
Fu et al. Aircraft recognition in SAR images based on scattering structure feature and template matching
Zheng et al. Fast ship detection based on lightweight YOLOv5 network
CN108345856B (en) SAR automatic target recognition method based on heterogeneous convolutional neural network integration
CN113095333B (en) Unsupervised feature point detection method and unsupervised feature point detection device
US20220237465A1 (en) Performing inference and signal-to-noise ratio based pruning to train sparse neural network architectures
US11366987B2 (en) Method for determining explainability mask by neural network, system and medium
Liu et al. Target recognition in synthetic aperture radar images via joint multifeature decision fusion
CN112001488A (en) Training generative antagonistic networks
CN111223128A (en) Target tracking method, device, equipment and storage medium
CN111242228B (en) Hyperspectral image classification method, hyperspectral image classification device, hyperspectral image classification equipment and storage medium
US11468294B2 (en) Projecting images to a generative model based on gradient-free latent vector determination
Du et al. Physical-related feature extraction from simulated SAR image based on the adversarial encoding network for data augmentation
CN114841227A (en) Modifying a set of parameters characterizing a computer vision model
CN111461177B (en) Image identification method and device
CN113191268B (en) SAR target recognition countermeasure sample generation method based on depth coding network
EP3862926A1 (en) Method of identifying filters in a neural network, system and storage medium of the same
Kutluk et al. Classification of hyperspectral images using mixture of probabilistic PCA models
CN113065617A (en) Object recognition method, object recognition device, computer equipment and storage medium
Antsiperov Object identification on low-count images by means of maximum-likelihood descriptors of precedents

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant