CN116522133A - HRRP sample generation method based on SACGAN model - Google Patents
- Publication number
- CN116522133A (application No. CN202310282003.8A)
- Authority
- CN
- China
- Prior art keywords
- sample
- model
- sacgan
- hrrp
- samples
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/02—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
- G01S7/41—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
- G01S7/418—Theoretical aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention discloses an HRRP sample generation method based on a SACGAN model, which comprises the following steps: generating a small sample training set based on radar echoes, the small sample training set comprising HRRP samples of a plurality of categories, each HRRP sample carrying a corresponding category label; constructing a SACGAN model comprising a generator, a discriminator and an auxiliary classifier based on self-attention and spectral normalization; inputting the HRRP samples and category labels into the SACGAN model and training it with a hinge loss function; and generating a sample-expanded training set with the trained SACGAN model. The method solves both the poor quality of generated HRRP samples caused by gradient explosion or vanishing gradients in the discriminator when the existing CWGAN model is used to expand HRRP samples, and the poor quality of generated samples when the CACGAN model is used for the same purpose.
Description
Technical Field
The invention belongs to the technical field of radar, and particularly relates to an HRRP sample generation method based on a SACGAN model.
Background
A radar high resolution range profile (High Resolution Range Profile, HRRP) is obtained in broadband radar, where the echo signal of the object under test can be regarded as the vector sum of all scattered echoes within each resolved range bin. HRRP is one-dimensional information that contains characteristic information such as the geometric structure of the target and the energy distribution of its scattering points. Compared with two-dimensional echo signals (SAR and ISAR), it has the advantages of being easy to acquire, store and process, and is therefore quite valuable for radar target recognition and classification. However, when building an HRRP recognition database of non-cooperative enemy targets, it is difficult for the radar to detect and keep track of the targets, and thus difficult to obtain sufficient HRRP samples. When such an incomplete HRRP recognition database is used as a training set for the recognition system, the small number of input HRRP samples leads to a large difference between in-library and out-of-library samples; the features extracted by the recognition system then cannot represent the essential characteristics of the target, and the recognition performance and generalization capability of the classification system suffer.
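To make the "vector sum of scattered echoes per range bin" concrete, the following is a minimal numpy sketch that forms an HRRP from a simulated stepped-frequency echo: the coherent sum over point scatterers is taken at each frequency step, and the IFFT across frequency resolves the scatterers into range bins. The scatterer model, carrier/bandwidth values and max-normalization are illustrative assumptions, not the patent's procedure.

```python
import numpy as np

def synthesize_hrrp(scatterer_ranges, scatterer_amps, n_freq=256,
                    f0=9e9, bandwidth=1e9):
    """Form a High Resolution Range Profile from a stepped-frequency echo.

    The echo at each frequency step is the coherent (vector) sum of all
    scatterer returns; the IFFT across frequency resolves them into
    range bins, one amplitude per bin.
    """
    c = 3e8
    freqs = f0 + np.arange(n_freq) * (bandwidth / n_freq)
    echo = np.zeros(n_freq, dtype=complex)
    for r, a in zip(scatterer_ranges, scatterer_amps):
        # Two-way phase delay of a point scatterer at range r.
        echo += a * np.exp(-1j * 4 * np.pi * freqs * r / c)
    hrrp = np.abs(np.fft.ifft(echo))   # magnitude profile over range bins
    return hrrp / hrrp.max()           # max-normalize (a common choice)

profile = synthesize_hrrp([0.0, 1.5, 3.0], [1.0, 0.8, 0.5])
```

With a 1 GHz bandwidth the nominal range resolution is c/(2B) = 0.15 m, so the three scatterers above land in well-separated bins of the 256-point profile.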
A master's thesis, "Research and implementation of a radar one-dimensional range profile target recognition method based on deep learning" (University of Electronic Science and Technology of China, June 2019), discloses an HRRP recognition database sample expansion method based on the Conditional Wasserstein Generative Adversarial Network (CWGAN). The method first preprocesses the acquired HRRP data and divides it into a training set and a test set. It then constructs a CWGAN network consisting of two modules, a generator and a discriminator, designs their loss functions, and optimizes the discriminator loss with weight clipping. Finally, it generates HRRP data with the CWGAN model and uses them to augment the original data set. The drawback of this method is that optimizing the discriminator loss with weight clipping causes gradient explosion or vanishing gradients in the discriminator, so the quality of the HRRP data generated by the CWGAN model is poor.
Ma Pei, in the patent document "CACGAN-based HRRP identification database sample expansion method" (application No. 202110283773.5, publication No. 112784930A), discloses a method for expanding HRRP identification database samples based on the generative adversarial network CACGAN. The method first splices each sample with its corresponding class label and uses the result as the input of the CACGAN. In the network design stage, it constructs a conditional auxiliary-classification generative network consisting of a generator, a discriminator and an auxiliary classifier, adds a gradient penalty term to the discriminator loss, and computes the auxiliary classifier loss with cross entropy; finally, it generates HRRP samples of different categories with the CACGAN model to expand the identification database. The drawback of this method is that the CACGAN model focuses on the separability of the generated samples: when separability is good, the diversity of the generated samples is low and their quality is poor. Separability and diversity are not balanced, so the recognition performance of a classification system trained on the expanded identification database is low.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an HRRP sample generation method based on a SACGAN model. It addresses the poor stability of the discriminator and the poor quality of generated HRRP samples caused by gradient explosion or vanishing gradients in the discriminator when the CWGAN model is used to expand HRRP identification database samples, as well as the imbalance between sample separability and sample quality when the CACGAN model is used: the generated samples are highly separable but of poor quality, so the recognition performance of a classifier trained on the expanded identification database is low. The technical problems to be solved by the invention are addressed by the following technical scheme:
in a first aspect, the present invention provides a method for generating HRRP samples based on a SACGAN model, including:
step 1: generating a small sample training set based on the radar echo; the small sample training set comprises HRRP samples of a plurality of categories, and each HRRP sample is provided with a corresponding category label;
step 2: constructing a SACGAN model comprising a generator, a discriminator and an auxiliary classifier based on self-attention and spectrum normalization;
step 3: inputting the HRRP sample and the category label into the SACGAN model, and training the SACGAN model based on a hinge loss function;
step 4: and generating a sample expansion training set by using the trained SACGAN model.
In a second aspect, the present invention provides a radar target recognition method, including:
constructing an HRRP sample set by adopting the method described in the embodiment;
training a network model of a radar target recognition system by using the HRRP sample set until the network converges;
and inputting the sample to be identified into a trained radar target identification system network to obtain an identification result.
The invention has the beneficial effects that:
in the HRRP sample generation method based on the SACGAN model, a SACGAN model comprising a generator, a discriminator and an auxiliary classifier is first constructed based on self-attention and spectral normalization. On one hand, the auxiliary classifier introduced by the model uses the class labels for auxiliary training, so that even with few training samples the network's bias in feature extraction and feature selection is small and the quality of the generated samples is high. On the other hand, self-attention modules are introduced into the generator and the discriminator; they capture the global information of a sample, attend to the important features of the data and learn key information effectively, improving the efficiency and performance of the network and the quality of the samples generated by the model. Meanwhile, spectral normalization makes the discriminator satisfy the Lipschitz constraint, limiting its gradient to a constant range so that better gradients propagate back to the generator, which solves the vanishing-gradient and gradient-explosion problems of GAN models. Finally, a hinge loss function is used during training, so that only samples that are not confidently judged true or false influence the gradient, making the training process of the model more stable. Therefore, compared with existing HRRP sample generation methods, the HRRP samples generated by the proposed method are of higher quality, which in turn can improve the accuracy of radar target recognition.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Drawings
Fig. 1 is a schematic flow chart of an HRRP sample generating method based on a SACGAN model according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a generator in a SACGAN model according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of the discriminator and the auxiliary classifier in a SACGAN model according to an embodiment of the invention;
fig. 4 is a schematic flow chart of a radar target recognition method according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but embodiments of the present invention are not limited thereto.
Example 1
Referring to fig. 1, fig. 1 is a flowchart of an HRRP sample generating method based on a SACGAN model according to an embodiment of the present invention, which includes:
step 1: generating a small sample training set based on the radar echo; the small sample training set comprises a plurality of HRRP samples of a plurality of categories, and each HRRP sample has a corresponding category label.
Specifically, step 1 includes:
(11) Extract HRRP samples of a plurality of classes from the radar echoes along the range dimension on the radar line of sight.
(12) Set a class label for each HRRP sample, and combine all the processed samples and their corresponding class labels into the small sample training set.
Optionally, as an implementation manner, the processing of setting a category label on the HRRP sample in this embodiment is as follows:
the class label of each sample with class number 1 in the training set of small samples is marked as y 1 The class label of each sample with class number 2 is marked as y 2 … the class label of each sample with class number U is denoted as y U ,y 1 Take the value of 1, y 2 Take a value of 2, …, y U And the value is U, and U represents the total number of class labels in the small sample training set.
Step 2: SACGAN (Self-attention Auxiliary Classifier Generative Adversarial Networks) comprising a generator, a discriminant and an auxiliary classifier is constructed based on Self-attention and spectral normalization, the Self-attention auxiliary classifier generating an countermeasure network model.
(21) Build a generator comprising one first fully connected layer, four first convolution modules, two first self-attention modules and a first output layer.
Referring to fig. 2, fig. 2 is a schematic structural diagram of the generator in a SACGAN model according to an embodiment of the invention; the first fully connected layer, the first first convolution module, the first first self-attention module, the second first convolution module, the second first self-attention module, the third first convolution module and the fourth first convolution module are connected in sequence to form the feature extraction part of the generator.
Optionally, in this embodiment, each first convolution module comprises, in order, an upsampling layer, a convolution layer, a spectral normalization layer and a ReLU; the upsampling factor is set to 2, the convolution kernel sizes are all set to 1×3, and the numbers of feature-map channels are 512, 256, 128 and 64 in sequence. The convolution kernels used for Q, K and V in the first self-attention modules are all 1×1. The first output layer consists of a fully connected layer and Tanh, with the number of nodes of the fully connected layer set to 4096.
(22) Build a discriminator comprising one second fully connected layer, five second convolution modules, two second self-attention modules and a second output layer.
Referring to fig. 3, fig. 3 is a schematic structural diagram of the discriminator and the auxiliary classifier in a SACGAN model according to an embodiment of the invention; the first second convolution module, the second second convolution module, the third second convolution module, the first second self-attention module, the fourth second convolution module, the second second self-attention module and the fifth second convolution module are connected in sequence to form the feature extraction part of the discriminator.
Optionally, in this embodiment, each second convolution module comprises, in order, a convolution layer, a spectral normalization layer and a LeakyReLU; the convolution kernel sizes are all set to 1×3, and the numbers of feature-map channels are 1, 32, 64, 128 and 256 in sequence. The convolution kernels used for Q, K and V in the second self-attention modules are all 1×1. The second output layer consists of a fully connected layer and Tanh, with the number of nodes of the fully connected layer set to 4096.
(23) Build an auxiliary classifier comprising a feature extraction part and a third output layer; the third output layer comprises a fully connected layer and Softmax.
Optionally, in this embodiment, the feature extraction part of the auxiliary classifier is shared with the discriminator, as shown in fig. 3.
(24) Combine the generator, the discriminator and the auxiliary classifier to obtain the SACGAN model.
On one hand, the SACGAN model constructed above introduces an auxiliary classifier that uses the class labels for auxiliary training, so that even with few training samples the network's bias in feature extraction and feature selection is small and the quality of the generated samples is high. On the other hand, self-attention modules are introduced into the generator and the discriminator; they capture the global information of a sample, attend to the important features of the data and learn key information effectively, improving the efficiency and performance of the network and the quality of the samples generated by the model. Meanwhile, spectral normalization makes the discriminator satisfy the Lipschitz constraint, limiting its gradient to a constant range so that better gradients propagate back to the generator, which solves the vanishing-gradient and gradient-explosion problems of GAN models.
Step 3: the HRRP sample and class label are input into the SACGAN model, and the SACGAN model is trained based on the hinge loss function.
It will be appreciated that training parameters also need to be set before training the built SACGAN model.
Preferably, the learning rate of the generator is set to 0.0002 and the learning rate of the discriminator to 0.0005 for training the SACGAN model.
Specifically, step 3 includes:
(31) Randomly generate a number of label-embedded noise samples from a Gaussian distribution and use them as input to the generator in the SACGAN model to obtain a set of generated samples of specified classes.
First, N noise samples are randomly sampled from a Gaussian distribution, N class labels are randomly generated, and the N class labels are encoded and embedded into the noise samples to obtain the embedded noise samples.
Then, the embedded noise samples are used as the input of the generator to produce generated samples of the specified classes, forming a generated sample set.
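The patent does not specify how the class labels are "encoded and embedded" into the noise; a common choice, shown here as an assumption, is to one-hot encode the labels and concatenate them onto the Gaussian noise vectors. The dimensions are illustrative.

```python
import numpy as np

def make_embedded_noise(n_samples, noise_dim, num_classes, rng=None):
    """Sample Gaussian noise, draw random class labels (1..U), and embed
    the labels by concatenating their one-hot codes onto the noise."""
    rng = rng or np.random.default_rng(0)
    z = rng.standard_normal((n_samples, noise_dim))         # Gaussian noise
    y = rng.integers(1, num_classes + 1, size=n_samples)    # labels 1..U
    one_hot = np.eye(num_classes)[y - 1]                    # encode labels
    return np.concatenate([z, one_hot], axis=1), y

embedded, y = make_embedded_noise(n_samples=64, noise_dim=100, num_classes=3)
```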
(32) Input the generated samples in the generated sample set and the HRRP samples in the small sample training set (i.e., the real samples) into the discriminator of the SACGAN model, which outputs the probability that each sample is judged to be a real sample; meanwhile, input them into the auxiliary classifier of the SACGAN model, which outputs the probability that each sample belongs to each class.
(33) Compute the loss function value of the discriminator and the loss function value of the generator in the SACGAN model at the current iteration.
Specifically, the calculation formula of the loss function value of the discriminator is:
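A hinge-loss form of the discriminator objective consistent with the term definitions that follow (a reconstruction, since the original formula is rendered as an image; not the verbatim patent formula) is:

```latex
L_D = \underbrace{\mathbb{E}_{x \sim p_r}\!\big[\max(0,\, 1 - D(x|y))\big]
    + \mathbb{E}_{z \sim p_z}\!\big[\max(0,\, 1 + D(G(z|y)|y))\big]}_{\text{discrimination (hinge) loss}}
    \;+\; L_c,
\qquad
L_c = -\,\mathbb{E}_{x \sim p_r}\!\big[\log C(y|x)\big]
      \;-\; \mathbb{E}_{z \sim p_z}\!\big[\log C(y|G(z|y))\big]
```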
wherein L_D represents the loss value of the discriminator in the current iteration, consisting of a discrimination loss and a classification loss L_c; x represents a set of real samples drawn from the small sample training set, p_r represents the real data distribution, p_z represents the Gaussian distribution, y represents the class label corresponding to each sample, z represents a set of noise samples drawn from the Gaussian distribution, C(y|x) represents the probability that the auxiliary classifier correctly classifies an input real sample, G(z|y) represents a set of generated samples produced by the generator, C(y|G(z|y)) represents the probability that the classifier correctly classifies an input generated sample, D(x|y) represents the probability that the discriminator judges an input real sample to be true, D(G(z|y)|y) represents the probability that the discriminator judges an input generated sample to be true, and E(·) represents the expectation operator;
the calculation formula of the loss function value of the generator is:
wherein L_G represents the loss value of the generator in the current iteration, z represents a set of noise samples drawn from the Gaussian distribution, y represents the class label corresponding to each sample, p_z represents the Gaussian distribution, G(z|y) represents a set of generated samples produced by the generator, C(y|G(z|y)) represents the probability that the classifier correctly classifies an input generated sample, and E(·) represents the expectation operator.
(34) At each iteration, update the network parameters in turn with the discriminator loss value and the generator loss value of the SACGAN model using the Adam optimizer, until convergence; after training is complete, save the network parameters of the generator in the SACGAN model.
In this embodiment, the hinge loss function is used during network training, so that only samples that are not confidently judged true or false influence the gradient, making the training process of the model more stable.
Step 4: and generating a sample expansion training set by using the trained SACGAN model.
(41) Initialize the generator in the SACGAN model with the trained network parameters.
(42) Randomly generate, from the Gaussian distribution, noise samples equal in number to the targets of each category in the small sample training set, and input the noise samples with their corresponding category labels into the generator to produce a generated sample set.
(43) Combine the generated sample set with the small sample training set to form the expanded sample training set.
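The merge in step (43) is a plain concatenation of the generated set with the small sample training set. The sketch below uses placeholder arrays standing in for real and generated HRRP data; the shapes (256 range bins, 200 real and 1400 generated samples per class over 3 classes) follow the simulation section and are assumptions about the running example, not requirements of the method.

```python
import numpy as np

# Placeholders standing in for real and generated HRRP samples.
real_x = np.random.default_rng(1).standard_normal((600, 256))    # 200 x 3 classes
real_y = np.repeat([1, 2, 3], 200)
gen_x = np.random.default_rng(2).standard_normal((4200, 256))    # 1400 x 3 classes
gen_y = np.repeat([1, 2, 3], 1400)

# Step (43): merge the generated set with the small sample training set.
expanded_x = np.concatenate([real_x, gen_x], axis=0)
expanded_y = np.concatenate([real_y, gen_y], axis=0)
```

After the merge each class holds 200 + 1400 = 1600 training samples.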
In the HRRP sample generation method based on the SACGAN model, a SACGAN model comprising a generator, a discriminator and an auxiliary classifier is first constructed based on self-attention and spectral normalization. On one hand, the auxiliary classifier introduced by the model uses the class labels for auxiliary training, so that even with few training samples the network's bias in feature extraction and feature selection is small and the quality of the generated samples is high. On the other hand, self-attention modules are introduced into the generator and the discriminator; they capture the global information of a sample, attend to the important features of the data and learn key information effectively, improving the efficiency and performance of the network and the quality of the samples generated by the model. Meanwhile, spectral normalization makes the discriminator satisfy the Lipschitz constraint, limiting its gradient to a constant range so that better gradients propagate back to the generator, which solves the vanishing-gradient and gradient-explosion problems of GAN models. Finally, a hinge loss function is used during training, so that only samples that are not confidently judged true or false influence the gradient, making the training process of the model more stable. Therefore, compared with existing HRRP sample generation methods, the HRRP samples generated by the proposed method are of higher quality.
Example two
On the basis of the first embodiment, the present embodiment provides a radar target recognition method. Referring to fig. 4, fig. 4 is a flowchart of a radar target recognition method according to an embodiment of the present invention, which includes:
s1: and constructing an HRRP sample set.
S2: training a network model of the radar target recognition system by using the HRRP sample set until the network converges;
s3: and inputting the sample to be identified into a trained radar target identification system network to obtain an identification result.
Specifically, step S1 of this embodiment constructs the HRRP sample set by the method provided in the first embodiment, which is not repeated here. Because the HRRP samples generated by that method are of higher quality, the accuracy of radar target recognition can be improved.
Example III
The effects of the present invention will be further described by simulation tests.
1. Simulation conditions:
the hardware platform of the simulation experiment of the invention is: intel i7-10700 2.9GHz, 16GB memory, windows10 operating system and Python version 3.9.
2. Simulation content and result analysis:
the test is to respectively generate HRRP data by using the method and the existing method for expanding the HRRP identification database sample based on CACGAN, and then to expand samples in a small sample set by using the generated HRRP data to obtain an expanded training set after expansion of the method and an expanded training set after expansion of a CACGAN model.
And then, verifying the quality of HRRP samples in the training set after the expansion of the invention and the training set after the expansion of the CACGAN model by constructing a CNN classifier identification system. And inputting the samples of the small sample training set, the samples of the training set expanded by the method and the samples of the training set expanded by the CACGAN model into a CNN classifier identification system to obtain a trained CNN classifier. The samples of the test set generated by the simulation experiment are respectively input into a trained CNN classifier, and the prediction category of each sample in the test set is output.
One prior art HRRP identification database sample expansion method based on CACGAN used in simulation experiments refers to a HRRP identification database sample expansion method based on generation of an antagonism network CACGAN disclosed in patent document "HRRP identification database sample expansion method based on CACGAN" (patent application No. 202110283773.5, application publication No. 112784930 a) of Ma Pei. The method is used for generating HRRP data, adding the generated HRRP data into the identification database, and completing sample expansion of the identification database. The HRRP identification database sample expansion method based on CACGAN for short.
The identification database and training set used in the simulation experiment are HRRP electromagnetic simulation data of 3 classes of aircraft. The small sample training set contains 200 class-1, 200 class-2 and 200 class-3 HRRP samples. The test sample set contains 1600 class-1, 1600 class-2 and 1600 class-3 HRRP samples. Each HRRP sample contains 256 range units.
The simulation experiment generates HRRP data with the sample expansion method of the invention to obtain the generated data set of the invention, which comprises 1400 class-1, 1400 class-2 and 1400 class-3 HRRP samples. This generated data set and the CACGAN-generated data set are then used to expand the small sample training set, yielding the training set expanded by the invention and the training set expanded by the CACGAN model.
A five-layer CNN classifier recognition system is built, whose structure comprises, in order, a first convolution layer, a second convolution layer, a third convolution layer, a first fully connected layer and a second fully connected layer. The numbers of feature maps of the first to third convolution layers are set to 32, 64 and 128 respectively; the convolution kernel sizes are all set to 1×9 with sliding step 1; the pooling downsampling kernel sizes are all set to 1×2 with sliding step 2; the input dimensions of the first and second fully connected layers are 4096 and 128, and their output dimensions are 128 and 3, respectively.
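The stated dimensions are internally consistent: assuming "same"-padded 1×9 convolutions (stride 1) so that only the 1×2 pooling with stride 2 shrinks the feature maps (the pooling placement after each convolution is an assumption), a 256-bin HRRP input reproduces the 4096-dim input of the first fully connected layer. A short pure-Python trace:

```python
def trace_cnn_shapes(input_len=256, channels=(32, 64, 128), pool_stride=2):
    """Trace feature-map length through the three conv + pool stages.

    Assumes 'same'-padded 1x9 convolutions (stride 1), so only the
    1x2 pooling with stride 2 halves the length at each stage.
    """
    length = input_len
    for _ in channels:
        length //= pool_stride            # 256 -> 128 -> 64 -> 32
    return channels[-1] * length          # flattened size fed to the FC layer

fc_input_dim = trace_cnn_shapes()         # 128 channels x 32 bins
```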
The small sample training set, the training set expanded by the invention and the CACGAN-expanded training set are each input into the CNN classifier, and after 300 training iterations three trained CNN classifiers are obtained. Each classifier then predicts the category of every sample in the test set, and the ratio of the number of test samples whose predicted category matches their true category to the total number of test samples is computed, giving three target recognition accuracies. The higher the accuracy, the better the recognition performance of the CNN classifier and the more complete the azimuth coverage of the HRRP samples in the expanded training set.
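The accuracy metric described above (matching predictions divided by total test samples, reported as a percentage) can be sketched as:

```python
import numpy as np

def recognition_accuracy(predicted, truth):
    """Ratio of test samples whose predicted class matches the true label,
    as a percentage (the format used in Table 1)."""
    predicted = np.asarray(predicted)
    truth = np.asarray(truth)
    return 100.0 * np.mean(predicted == truth)

# Toy example: 3 of 4 predictions correct -> 75%.
acc = recognition_accuracy([1, 2, 3, 3], [1, 2, 3, 1])
```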
The three target recognition accuracies are shown in Table 1.
TABLE 1. Target recognition accuracy

Training set | Recognition rate (%)
---|---
Small-sample training set | 89.37
Training set after CACGAN expansion | 90.04
Training set after expansion by the method of the invention | 91.33
As the simulation results in Table 1 show, the CNN trained after sample expansion by the proposed method outperforms the CNN trained after sample expansion by the existing method. This indicates that the HRRP samples generated by the proposed method are of higher quality, and that a CNN classification system trained on a small-sample training set expanded with these generated HRRP samples achieves high recognition performance.
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and it is not intended that the invention be limited to the specific embodiments described. It will be apparent to those skilled in the art that several simple deductions or substitutions may be made without departing from the spirit of the invention, and these should be considered to be within the scope of the invention.
Claims (10)
1. The HRRP sample generation method based on the SACGAN model is characterized by comprising the following steps of:
step 1: generating a small sample training set based on the radar echo; the small sample training set comprises HRRP samples of a plurality of categories, and each HRRP sample is provided with a corresponding category label;
step 2: constructing a SACGAN model comprising a generator, a discriminator and an auxiliary classifier based on self-attention and spectrum normalization;
step 3: inputting the HRRP sample and the category label into the SACGAN model, and training the SACGAN model based on a hinge loss function;
step 4: and generating a sample expansion training set by using the trained SACGAN model.
2. The HRRP sample generation method based on the SACGAN model as claimed in claim 1, wherein step 1 comprises:
11) extracting HRRP samples of a plurality of categories from the radar echoes along the distance dimension on the radar line of sight;
12) setting a class label for each HRRP sample, and combining all processed samples with their corresponding class labels into a small-sample training set.
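A hedged sketch of step 1: take the magnitude of a complex radar echo along the range (distance) dimension to form an HRRP sample, attach a class label, and collect the pairs into a small-sample training set. The peak-amplitude normalization is an illustrative assumption, not stated in the claim:

```python
import numpy as np

def echo_to_hrrp(echo):
    hrrp = np.abs(echo)                # amplitude profile along range cells
    return hrrp / hrrp.max()           # assumed normalization to [0, 1]

rng = np.random.default_rng(0)
# One hypothetical class with five 256-cell complex echoes:
echoes = {0: rng.normal(size=(5, 256)) + 1j * rng.normal(size=(5, 256))}
train_set = [(echo_to_hrrp(e), label)
             for label, group in echoes.items() for e in group]
print(len(train_set), train_set[0][0].shape)  # 5 (256,)
```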
3. The HRRP sample generation method based on the SACGAN model as claimed in claim 1, wherein step 2 comprises:
21) building a generator comprising a first fully connected layer, four first convolution modules, two first self-attention modules and a first output layer;
the first fully connected layer, the first of the first convolution modules, the first of the first self-attention modules, the second first convolution module, the second first self-attention module, the third first convolution module and the fourth first convolution module are connected in sequence to form the feature extraction part of the generator;
22) building a discriminator comprising a second fully connected layer, five second convolution modules, two second self-attention modules and a second output layer;
the first of the second convolution modules, the second of the second convolution modules, the third second convolution module, the first second self-attention module, the fourth second convolution module, the second second self-attention module and the fifth second convolution module are connected in sequence to form the feature extraction part of the discriminator;
23) building an auxiliary classifier comprising a feature extraction part and a third output layer, the third output layer comprising a fully connected layer and a Softmax;
24) combining the generator, the discriminator and the auxiliary classifier to obtain the SACGAN model.
4. The HRRP sample generation method based on the SACGAN model of claim 3, wherein the first convolution module comprises, in sequence, an upsampling layer, a convolution layer, a spectrum normalization layer and a ReLU; the stride of the upsampling layer is set to 2, the convolution kernel sizes are all set to 1×3, and the numbers of feature-map channels are 512, 256, 128 and 64 in sequence;
the convolution kernels used for Q, K and V in the first self-attention module are all of size 1×1.
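As an interpretation (not from the patent text), the shapes through the four first convolution modules can be traced under two assumptions: the listed channel counts 512, 256, 128, 64 are the module outputs, and the starting length is 16, chosen so that four upsampling doublings reach a 256-point HRRP consistent with the classifier dimensions above:

```python
# Trace feature-map shapes through the generator's four first convolution
# modules. Assumptions: channel counts are module outputs; start length 16.

def trace_generator_shapes(start_len=16):
    length, shapes = start_len, []
    for out_ch in (512, 256, 128, 64):    # feature-map channels per module
        length *= 2                        # upsampling layer, factor 2
        shapes.append((out_ch, length))    # after 1x3 conv, spectrum norm, ReLU
    return shapes

print(trace_generator_shapes())
# [(512, 32), (256, 64), (128, 128), (64, 256)]
```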
5. The HRRP sample generation method based on the SACGAN model of claim 3, wherein the second convolution module comprises, in sequence, a convolution layer, a spectrum normalization layer and a LeakyReLU; the convolution kernel sizes are all set to 1×3, and the numbers of feature-map channels are 1, 32, 64, 128 and 256 in sequence;
the convolution kernels used for Q, K and V in the second self-attention module are all of size 1×1.
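An illustrative sketch (not the patented network) of the self-attention modules in claims 4-5: on a 1-D feature map, a 1×1 convolution is a per-position linear map, so the Q/K/V projections reduce to matrix products. The C//8 channel reduction and the learnable output scale gamma follow the common SAGAN convention and are assumptions here; the claims only fix the 1×1 kernel size.

```python
import numpy as np

def self_attention_1d(x, wq, wk, wv, gamma=0.1):
    """x: (C, L) feature map; wq, wk: (C//8, C); wv: (C, C) weights."""
    q, k, v = wq @ x, wk @ x, wv @ x          # 1x1-conv projections
    logits = q.T @ k                          # (L, L) pairwise position scores
    logits -= logits.max(axis=1, keepdims=True)
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)   # softmax over key positions
    out = v @ attn.T                          # attention-weighted values, (C, L)
    return gamma * out + x                    # residual connection

rng = np.random.default_rng(1)
C, L = 16, 32
x = rng.normal(size=(C, L))
wq = rng.normal(size=(C // 8, C))
wk = rng.normal(size=(C // 8, C))
wv = rng.normal(size=(C, C))
y = self_attention_1d(x, wq, wk, wv)
print(y.shape)  # (16, 32)
```

With gamma initialized to 0, the module starts as an identity mapping and gradually learns how much long-range context to mix in, which is the usual rationale for the residual scale.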
6. The HRRP sample generation method based on the SACGAN model of claim 1, further comprising, after step 2 and before step 3:
setting the learning rate of the generator to 0.0002 and the learning rate of the discriminator to 0.0005.
7. The HRRP sample generation method based on the SACGAN model as claimed in claim 1, wherein step 3 comprises:
31) randomly generating a plurality of label-embedded noise samples from a Gaussian distribution and inputting them to the generator of the SACGAN model to obtain a generated sample set of the specified categories;
32) inputting the generated samples of the generated sample set and the HRRP samples of the small-sample training set into the discriminator of the SACGAN model, which outputs the probability that each sample is judged to be a real sample; meanwhile, inputting the same samples into the auxiliary classifier of the SACGAN model, which outputs the probability of each sample belonging to each class;
33) calculating the loss function value of the discriminator and the loss function value of the generator in the SACGAN model for the current iteration;
34) updating the network parameters in turn with the Adam method, using the discriminator loss value and the generator loss value of the current iteration, until convergence; after training is complete, saving the network parameters of the generator in the SACGAN model.
8. The HRRP sample generation method based on the SACGAN model of claim 7 wherein the loss function value of the discriminator is calculated by the formula:

L_D = E_{x∼p_r}[max(0, 1 − D(x|y))] + E_{z∼p_z}[max(0, 1 + D(G(z|y)|y))] + L_c,
L_c = −E_{x∼p_r}[log C(y|x)] − E_{z∼p_z}[log C(y|G(z|y))]
wherein L_D represents the loss value of the discriminator in the current iteration and consists of the discrimination loss and the classification loss L_c; x represents a set of real samples sampled from the small-sample training set; p_r represents the real-data distribution; p_z represents the Gaussian distribution; y represents the class label corresponding to each sample; z represents a set of noise samples drawn from the Gaussian distribution; C(y|x) represents the probability of the auxiliary classifier correctly classifying an input real sample; G(z|y) represents the set of generated samples produced by the generator; C(y|G(z|y)) represents the probability of the classifier correctly classifying an input generated sample; D(x|y) represents the probability of the discriminator judging an input real sample as real; D(G(z|y)|y) represents the probability of the discriminator judging an input generated sample as real; and E(·) denotes the expectation operation;
the calculation formula of the loss function value of the generator is as follows:
wherein L_G represents the loss value of the generator in the current iteration; z represents a set of noise samples sampled from the Gaussian distribution; y represents the class label corresponding to each sample; p_z represents the Gaussian distribution; G(z|y) represents the set of generated samples produced by the generator; C(y|G(z|y)) represents the probability of the classifier correctly classifying an input generated sample; and E(·) denotes the expectation operation.
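A numeric sketch of the hinge-based losses described above: d_real and d_fake stand for discriminator scores D(x|y) and D(G(z|y)|y), while c_real and c_fake are the auxiliary classifier's probabilities of the correct class. Treating the adversarial and classification terms as an unweighted sum is an assumption; the claim states only that the discriminator loss combines both.

```python
import numpy as np

def discriminator_loss(d_real, d_fake, c_real, c_fake):
    adv = (np.maximum(0.0, 1.0 - d_real).mean()
           + np.maximum(0.0, 1.0 + d_fake).mean())   # hinge terms
    cls = -np.log(c_real).mean() - np.log(c_fake).mean()
    return adv + cls

def generator_loss(d_fake, c_fake):
    # Generator tries to raise D's score on fakes and keep them classifiable.
    return -d_fake.mean() - np.log(c_fake).mean()

d_real = np.array([2.0, 0.5])
d_fake = np.array([-2.0, 0.0])
c = np.array([1.0, 1.0])      # perfectly classified: log-terms vanish
print(discriminator_loss(d_real, d_fake, c, c))  # 0.75
print(generator_loss(d_fake, c))                 # 1.0
```

Note that the hinge terms only penalize real samples scored below +1 and fake samples scored above −1, which bounds the discriminator gradients and is one reason hinge loss helps against gradient explosion and disappearance.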
9. The HRRP sample generation method based on the SACGAN model of claim 7 wherein step 4 comprises:
41) initializing the generator in the SACGAN model with the trained network parameters;
42) randomly drawing from the Gaussian distribution as many noise samples as there are targets of each category in the small-sample training set, and inputting the noise samples and the corresponding class labels into the generator to produce a generated sample set;
43) combining the generated sample set with the small-sample training set to form an expanded sample training set.
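A hedged sketch of the expansion procedure in steps 41)-43). `stub_generator` is a placeholder for the trained SACGAN generator G(z|y); only the bookkeeping (one noise sample per real sample, labels carried through, sets merged) mirrors the claimed steps:

```python
import numpy as np

def stub_generator(z, label):
    return np.tanh(z)                  # stand-in for G(z|y), not the real network

def expand_training_set(train_set, noise_dim=128, seed=0):
    rng = np.random.default_rng(seed)
    generated = [(stub_generator(rng.normal(size=noise_dim), y), y)
                 for _, y in train_set]          # same per-class counts as train_set
    return train_set + generated                 # expanded sample training set

small = [(np.zeros(128), y) for y in (0, 0, 1, 2)]
expanded = expand_training_set(small)
print(len(expanded))  # 8
```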
10. A method for radar target identification, comprising:
constructing an HRRP sample set using the method of any one of claims 1-9;
training a network model of a radar target recognition system by using the HRRP sample set until the network converges;
and inputting the sample to be identified into a trained radar target identification system network to obtain an identification result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310282003.8A CN116522133A (en) | 2023-03-21 | 2023-03-21 | HRRP sample generation method based on SACGAN model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310282003.8A CN116522133A (en) | 2023-03-21 | 2023-03-21 | HRRP sample generation method based on SACGAN model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116522133A true CN116522133A (en) | 2023-08-01 |
Family
ID=87407216
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310282003.8A Pending CN116522133A (en) | 2023-03-21 | 2023-03-21 | HRRP sample generation method based on SACGAN model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116522133A (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109766835B (en) | SAR target recognition method for generating countermeasure network based on multi-parameter optimization | |
Veldkamp et al. | Statistical postprocessing of wind speed forecasts using convolutional neural networks | |
CN109190491B (en) | Sea ice classification method for residual convolutional neural network SAR (synthetic Aperture Radar) image | |
CN112784930B (en) | CACGAN-based HRRP identification database sample expansion method | |
CN108764310B (en) | SAR target recognition method based on multi-scale multi-feature depth forest | |
CN104732244A (en) | Wavelet transform, multi-strategy PSO (particle swarm optimization) and SVM (support vector machine) integrated based remote sensing image classification method | |
CN109948722B (en) | Method for identifying space target | |
CN113240047A (en) | SAR target recognition method based on component analysis multi-scale convolutional neural network | |
CN112684427A (en) | Radar target identification method based on serial quadratic reinforcement training | |
CN112946600B (en) | Method for constructing radar HRRP database based on WGAN-GP | |
CN106951822B (en) | One-dimensional range profile fusion identification method based on multi-scale sparse preserving projection | |
CN104700116A (en) | Polarized SAR (synthetic aperture radar) image object classifying method based on multi-quantum ridgelet representation | |
CN112965062A (en) | Radar range profile target identification method based on LSTM-DAM network | |
CN113095417B (en) | SAR target recognition method based on fusion graph convolution and convolution neural network | |
CN113239959B (en) | Radar HRRP target identification method based on decoupling characterization variation self-encoder | |
CN116206203B (en) | Oil spill detection method based on SAR and Dual-EndNet | |
CN106203520A (en) | SAR image sorting technique based on degree of depth Method Using Relevance Vector Machine | |
CN116030300A (en) | Progressive domain self-adaptive recognition method for zero-sample SAR target recognition | |
CN116522133A (en) | HRRP sample generation method based on SACGAN model | |
CN115205602A (en) | Zero-sample SAR target identification method based on optimal transmission distance function | |
CN114818845A (en) | Noise-stable high-resolution range profile feature selection method | |
Wang et al. | Multitype label noise modeling and uncertainty-weighted label correction for concealed object detection | |
CN110135280B (en) | Multi-view SAR automatic target recognition method based on sparse representation classification | |
CN114137518A (en) | Radar high-resolution range profile open set identification method and device | |
Zhang et al. | An Incremental Recognition Method for MFR Working Modes Based on Deep Feature Extension in Dynamic Observation Scenarios |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||