CN109766835B - SAR target recognition method based on a multi-parameter-optimized generative adversarial network - Google Patents

SAR target recognition method based on a multi-parameter-optimized generative adversarial network

Info

Publication number
CN109766835B
CN109766835B (application CN201910026176.7A)
Authority
CN
China
Prior art keywords
network
discriminator
generator
parameters
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910026176.7A
Other languages
Chinese (zh)
Other versions
CN109766835A (en)
Inventor
杜兰
郭昱辰
何浩男
陈健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Xian Cetc Xidian University Radar Technology Collaborative Innovation Research Institute Co Ltd
Original Assignee
Xidian University
Xian Cetc Xidian University Radar Technology Collaborative Innovation Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University and Xian Cetc Xidian University Radar Technology Collaborative Innovation Research Institute Co Ltd
Priority to CN201910026176.7A
Publication of CN109766835A
Application granted
Publication of CN109766835B
Legal status: Active

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Abstract

The invention discloses a synthetic aperture radar (SAR) target recognition method based on a multi-parameter-optimized generative adversarial network (GAN), which mainly addresses two problems of the prior art: the low recognition rate during classifier training, and the lack of any guarantee that the trained classifier parameters are an optimal solution. The scheme is implemented as follows: generate an initial training sample set and a test sample set, and expand the initial training samples to produce the final training sample set; set the structure of the GAN and the number of parameter groups; train the GAN by cross-training multiple groups of network parameters, training the discriminator on both the training-set samples and the pseudo samples produced by the generator; and identify the target model with the multiple groups of trained discriminators, summing and averaging their outputs to obtain the recognition result. The invention improves the accuracy of SAR target recognition and can be used to identify stationary SAR targets.

Description

SAR target recognition method based on a multi-parameter-optimized generative adversarial network
Technical Field
The invention belongs to the technical field of communication, and further relates to a synthetic aperture radar (SAR) target-model recognition method that can be used to identify the model of a stationary target in SAR imagery.
Background
Synthetic aperture radar (SAR) works in all weather and at all times, with high resolution and strong penetration, and has therefore become an important means of earth observation and military reconnaissance; automatic target recognition in SAR images is attracting ever wider attention. At present, most SAR target recognition methods use only the original training data when training the classifier, and the deep models commonly adopted for classifier design mostly converge to a local optimum.
The University of Electronic Science and Technology of China proposes a sparse-representation-based SAR target recognition method in the patent document "SAR image recognition method" (application No. CN201210201460.1, publication No. CN102737253A). The method represents the target data as a linear combination of the training samples using sparse-representation theory, solves an optimization problem to obtain approximately non-negative, discriminative sparse coefficients, and then determines the class of a sample from the sum of its per-class coefficients. The degree of similarity between the target data and the training samples serves as the classification criterion, reflecting the true class of the target data. Its disadvantage is that the classification model is trained using only the original training data.
Xidian University proposes a CNN-based SAR target recognition method in the patent document "CNN-based SAR target recognition method" (application No. CN201510165886.X, publication No. CN104732243A). The method is implemented as follows: apply multiple random translations to each training image to obtain expanded data, which is merged into the training sample set; build a convolutional neural network (CNN) structure; input the expanded training sample set into the CNN to train the network model; apply multiple translations to each test sample to obtain an expanded test sample set; and input the test sample set into the trained CNN model for testing to obtain its recognition rate. Its disadvantage is that the deep-learning method inevitably falls into a local optimum, the trained model cannot be guaranteed to be an optimal solution, and the results obtained under different priors and initializations are unstable.
Disclosure of Invention
The object of the invention is to provide, against the above defects of the prior art, an SAR target recognition method based on a multi-parameter-optimized generative adversarial network, so as to stabilize recognition performance and improve the recognition rate.
The technical idea of the invention is to generate sample images similar to the training set with a generative model, increasing the data and information available when training the classifier; and, when training the adversarial model, to jointly train multiple groups of parameters simultaneously and take the average of their results as the final prediction, which keeps the model from being trapped in a local optimum and improves the stability and accuracy of recognition. The implementation scheme comprises the following steps:
(1) Generating a training sample set and a testing sample set:
(1a) Randomly select at least 200 images from each class of the synthetic aperture radar SAR image set to form an initial training sample set, and form a test sample set from all remaining samples;
(1b) Expand each image in the initial training sample set by translation, rotation and flipping to obtain an expanded training sample set, and combine the initial and expanded sets into the final training sample set;
(2) Setting the structure and the number of parameter groups of the generative adversarial network:
in TensorFlow, set the number of layers of the generator and the discriminator of the generative adversarial network and the number of convolution kernels in each layer, and set the number of groups of network parameters according to the required precision;
(3) Training the generative adversarial network:
(3a) Fix the parameters of the discriminator, randomly generate a group of noise vectors, input them into the generator to obtain a group of generated pseudo samples, input the pseudo samples into the discriminator, and update the parameters of the generator by minimizing the generator's objective function;
(3b) Fix the parameters of the generator, randomly generate a group of noise vectors, input them into the generator to obtain a group of generated pseudo samples, input the pseudo samples together with the training data set into the discriminator, and update the parameters of the discriminator by maximizing the discriminator's objective function;
(3c) Judge whether the objective functions of the generator and the discriminator have converged: if not, return to step (3a); if they have converged, stop network training to obtain the trained generative adversarial network;
(4) Identifying the target model with the trained generative adversarial network:
(4a) Input all samples of the test sample set into the discriminator corresponding to each group of trained parameters to obtain each discriminator's output vector y_m;
(4b) Sum the output vectors y_m of all discriminators and average them; the model class corresponding to the largest dimension of the mean vector is the model recognition result of the test sample.
Compared with the prior art, the invention has the following advantages:
First, when training the classifier, i.e. the discriminator of the generative adversarial network, the pseudo samples produced by the generator are used in addition to the training-set samples. This overcomes the low recognition rate of prior methods that train only on the original samples: the classifier sees more information during training, is trained more thoroughly, classifies images more strongly, and thus recognizes SAR targets more accurately.
Second, the invention trains the generative adversarial network by cross-training multiple groups of network parameters and takes the average of their results as the final target-model identification. This addresses the prior-art problem that a classifier inevitably falls into a local optimum, so the trained network parameters cannot be guaranteed optimal; the method is robust to different initializations and raises the SAR target recognition rate.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention;
FIG. 2 is a schematic diagram of a SAR image used in the present invention;
FIG. 3 is a simulation diagram of pseudo samples generated by the generator of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to FIG. 1, the implementation steps of this example are as follows.
Step 1: generate a training sample set and a test sample set.
Randomly select at least 200 images from each class of the synthetic aperture radar SAR image set to form an initial training sample set, and form a test sample set from all remaining samples;
shift each picture in the initial training sample set upwards, downwards, leftwards and rightwards by 30 pixels, respectively, to obtain the 4x translation-expanded samples;
rotate each picture in the initial training sample set clockwise by 45°, 90°, 135°, 180°, 225°, 270° and 315°, respectively, to obtain the 7x rotation-expanded samples;
flip each picture in the initial training sample set left-right and top-bottom, respectively, to obtain the 2x flip-expanded samples;
and combine the initial training sample set with the translation-, rotation- and flip-expanded samples into the final training sample set, as sketched below.
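The expansion step can be sketched as follows in Python with NumPy and SciPy (the patent names only TensorFlow as its software, so the libraries, the helper names and the zero-padding choice here are illustrative assumptions):

import numpy as np
from scipy.ndimage import rotate, shift

def expand_sample(img):
    """Return the 13 augmented copies of one SAR image: 4 shifts of
    30 pixels, 7 clockwise rotations in 45-degree steps, and 2 flips."""
    out = []
    for dy, dx in [(-30, 0), (30, 0), (0, -30), (0, 30)]:   # up, down, left, right
        out.append(shift(img, (dy, dx), mode='constant'))   # zero-pad vacated pixels
    for angle in range(45, 360, 45):                        # 45, 90, ..., 315 degrees
        out.append(rotate(img, -angle, reshape=False, mode='constant'))  # negative = clockwise
    out.append(np.fliplr(img))                              # left-right flip
    out.append(np.flipud(img))                              # top-bottom flip
    return out

def build_training_set(initial_samples):
    """Final training set = initial samples plus all 13 augmented copies of each."""
    final = list(initial_samples)
    for img in initial_samples:
        final.extend(expand_sample(img))
    return np.stack(final)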
Step 2: set the structure and the number of parameter groups of the generative adversarial network.
A generative adversarial network is a deep network model composed of a generator and a discriminator.
In this example, the number of layers of the generator and the discriminator and the number of convolution kernels in each layer are set in TensorFlow, and the number of groups of network parameters is chosen according to the required precision: the generator's network parameters are set to N groups and the discriminator's to M groups, with N > 0 and M > 0. The precision of the generator network is proportional to N (the larger N, the higher the precision), and the precision of the discriminator network is proportional to M (the larger M, the higher the precision).
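A minimal TensorFlow 2.x Keras sketch of this setup is given below; the patent fixes only the choice of layer counts, kernel numbers and the group counts N and M, so the specific layer sizes, noise dimension, image shape and class count used here are assumptions:

import tensorflow as tf
from tensorflow.keras import layers

K_CLASSES = 3      # number of target classes K (assumed)
NOISE_DIM = 100    # length of the noise vector z (assumed)

def make_generator():
    # noise vector -> 32x32 single-channel pseudo sample (shapes assumed)
    return tf.keras.Sequential([
        layers.Dense(8 * 8 * 128, activation='relu', input_shape=(NOISE_DIM,)),
        layers.Reshape((8, 8, 128)),
        layers.Conv2DTranspose(64, 4, strides=2, padding='same', activation='relu'),
        layers.Conv2DTranspose(1, 4, strides=2, padding='same', activation='tanh'),
    ])

def make_discriminator():
    # K+1 softmax outputs: the K real classes plus the fake class K+1
    return tf.keras.Sequential([
        layers.Conv2D(64, 4, strides=2, padding='same', activation='relu',
                      input_shape=(32, 32, 1)),
        layers.Conv2D(128, 4, strides=2, padding='same', activation='relu'),
        layers.Flatten(),
        layers.Dense(K_CLASSES + 1, activation='softmax'),
    ])

N, M = 2, 2        # groups of generator / discriminator parameters
generators = [make_generator() for _ in range(N)]
discriminators = [make_discriminator() for _ in range(M)]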
Step 3: train the generative adversarial network.
3.1) Train the generator's network parameters:
fix the parameters of the discriminator, randomly generate a group of noise vectors, input them into the generator to obtain a group of generated pseudo samples, input the pseudo samples into the discriminator, and update the parameters of the generator by minimizing the generator's objective function;
the objective function of the generator is expressed as follows:
$$\min_{G}\ \mathbb{E}_{z\sim p_z(z)}\big[\log p(y=K+1\mid G(z),D)\big]$$
where G denotes the generator network, D the discriminator network, z the noise, $p_z(z)$ the prior distribution of z, E(·) the expectation operator, G(z) the pseudo sample output when the noise z is input into the generator network G, K the total number of classes in the training sample set, K+1 the class label assigned to pseudo samples, and $p(y=K+1\mid G(z),D)$ the value of the (K+1)-th dimension of the output vector of the discriminator network D when its input is G(z);
for the N groups of generator network parameters $\theta_G^{(n)}$, the corresponding generator objective functions are:
$$\min_{\theta_G^{(n)}}\ \frac{1}{M}\sum_{m=1}^{M}\mathbb{E}_{z\sim p_z(z)}\big[\log p(y=K+1\mid G_n(z),D_m)\big],\qquad n=1,2,\ldots,N$$
where $\theta_G^{(n)}$ denotes the n-th group of generator network parameters, $G_n$ the generator network formed by the n-th group of generator network parameters, and $D_m$ the discriminator network formed by the m-th of the M groups of discriminator network parameters, n = 1, 2, ..., N, m = 1, 2, ..., M;
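One generator update under this objective can be sketched as follows, reusing the models from the setup sketch above; reading the multi-discriminator term as an average over the M fixed discriminators follows the reconstruction of the formula, and the batch size and the small constant added before the logarithm are assumptions:

def generator_step(gen, discriminators, optimizer, batch=64, noise_dim=100):
    """Update one generator G_n: minimize the average log-probability that
    the M fixed discriminators assign its pseudo samples to class K+1."""
    z = tf.random.normal([batch, noise_dim])
    with tf.GradientTape() as tape:
        fake = gen(z, training=True)
        # class K+1 is the last softmax dimension of each discriminator
        loss = tf.add_n([
            tf.reduce_mean(tf.math.log(d(fake, training=False)[:, -1] + 1e-8))
            for d in discriminators
        ]) / len(discriminators)
    grads = tape.gradient(loss, gen.trainable_variables)
    optimizer.apply_gradients(zip(grads, gen.trainable_variables))
    return loss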
3.2 Network parameters for training the discriminators):
fixing the parameters of a generator, randomly generating a group of noise vectors, and inputting the noise vectors into the generator to obtain a group of generated pseudo samples; inputting the pseudo sample and the training data set into a discriminator together, and updating parameters of the discriminator through a target function of the maximization discriminator;
the objective function of the discriminator is expressed as follows:
$$\max_{D}\ \mathbb{E}_{(x,y)\sim p_{data}(x,y)}\big[\log p(y=l\mid x,D)\big]+\mathbb{E}_{z\sim p_z(z)}\big[\log p(y=K+1\mid G(z),D)\big]$$
where G denotes the generator network, D the discriminator network, x a real sample, y = l the label of the real sample with l = 1, 2, ..., K and K the total number of training-sample classes, $p_{data}(x,y)$ the joint distribution of samples and labels, $p(y=l\mid x,D)$ the value of the l-th dimension of the discriminator's output when the real sample x is input into the discriminator network, z the noise, $p_z(z)$ the prior distribution of z, E(·) the expectation operator, G(z) the pseudo sample output when the noise z is input into the generator network G, K+1 the class label assigned to pseudo samples, and $p(y=K+1\mid G(z),D)$ the value of the (K+1)-th dimension of the output vector of the discriminator network D when its input is G(z);
for the M groups of discriminator network parameters $\theta_D^{(m)}$, the corresponding discriminator objective functions are:
$$\max_{\theta_D^{(m)}}\ \mathbb{E}_{(x,y)\sim p_{data}(x,y)}\big[\log p(y=l\mid x,D_m)\big]+\frac{1}{N}\sum_{n=1}^{N}\mathbb{E}_{z\sim p_z(z)}\big[\log p(y=K+1\mid G_n(z),D_m)\big],\qquad m=1,2,\ldots,M$$
where $\theta_D^{(m)}$ denotes the m-th group of discriminator network parameters, $G_n$ the generator network formed by the n-th of the N groups of generator network parameters, and $D_m$ the discriminator network formed by the m-th group of discriminator network parameters, n = 1, 2, ..., N, m = 1, 2, ..., M;
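Correspondingly, one discriminator update can be sketched as below: discriminator D_m maximizes the true-label log-likelihood on real samples plus the average fake-class log-probability on pseudo samples from the N fixed generators (integer label encoding and eager execution are assumptions):

def discriminator_step(disc, generators, optimizer, x_real, y_real,
                       noise_dim=100):
    """Update one discriminator D_m by maximizing its objective
    (implemented as minimizing the negative)."""
    batch = x_real.shape[0]                       # eager execution assumed
    with tf.GradientTape() as tape:
        # real term: log p(y = l | x, D_m) at the true integer label l
        p_real = disc(x_real, training=True)
        real_term = tf.reduce_mean(tf.math.log(
            tf.gather(p_real, y_real, batch_dims=1) + 1e-8))
        # fake term: log p(y = K+1 | G_n(z), D_m), averaged over the N generators
        fake_term = tf.add_n([
            tf.reduce_mean(tf.math.log(
                disc(g(tf.random.normal([batch, noise_dim]), training=False),
                     training=True)[:, -1] + 1e-8))
            for g in generators
        ]) / len(generators)
        loss = -(real_term + fake_term)
    grads = tape.gradient(loss, disc.trainable_variables)
    optimizer.apply_gradients(zip(grads, disc.trainable_variables))
    return -loss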
3.3 Determine whether the generator and arbiter objective functions converge: if the target function is not converged, return to 3.1); and if the target function is converged, stopping network training to obtain a trained generated confrontation network.
Step 4: identify the target model with the trained generative adversarial network.
4.1) Input all samples of the test sample set into the discriminator corresponding to each group of trained parameters, obtaining each discriminator's output vector y_m;
4.2) Sum the output vectors y_m of all discriminators and average them; the model class corresponding to the largest dimension of the mean vector is the model recognition result of the test sample, expressed as follows:
$$\bar{y}=\frac{1}{M}\sum_{m=1}^{M}y_m,\qquad \text{result}=\operatorname{findmax}(\bar{y})$$
where $y_m$ denotes the K-dimensional output vector of the m-th discriminator, each dimension giving the probability that the discriminator assigns the test sample to that class; $\bar{y}$ is the mean vector obtained by summing the $y_m$ and averaging, m = 1, 2, ..., M, with M the number of groups of discriminator parameters; findmax(·) returns the dimension at which a vector attains its maximum value, and the dimension at which $\bar{y}$ is largest is the model recognition result of the test sample.
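This decision rule can be sketched as follows; dropping the (K+1)-th fake-class dimension before averaging is an assumption consistent with y_m being K-dimensional at test time:

def predict_model_type(discriminators, x_test, k_classes):
    """Average the M discriminator softmax vectors over the K real classes
    and return the class index of the largest mean, per test sample."""
    probs = [d(x_test, training=False)[:, :k_classes] for d in discriminators]
    y_bar = tf.add_n(probs) / len(discriminators)   # mean vector over the M groups
    return tf.argmax(y_bar, axis=1)                 # findmax over dimensions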
The effect of the present invention will be further described with reference to simulation experiments.
1. Simulation experiment conditions.
The hardware platform of the simulation experiments is an Intel Xeon CPU with a main frequency of 2.20 GHz, 128 GB of memory and an NVIDIA GTX 1080Ti graphics card; the operating system is Ubuntu 16.04 LTS, and the software used is Python 2.7 and TensorFlow.
The existing methods used for comparison are: the target recognition method based on a linear support vector machine classifier (SVM), the target recognition method based on an autoencoder (AE), and the target recognition method based on a restricted Boltzmann machine (RBM).
2. Simulation experiment contents.
Simulation experiment 1: using the method of the invention, the network parameters were trained on measured data from the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset, and pseudo samples were generated by generators built from two groups of trained parameters. The results are shown in FIG. 3, wherein:
FIG. 3 (a) is a pseudo sample image generated by a generator constructed with a first set of trained parameters after inputting a set of noise;
fig. 3 (b) is a pseudo sample image generated by a generator constructed with a second set of trained parameters after a set of noise is input.
Simulation experiment 2: the method of the invention and the three existing methods were applied to target-model identification on the measured data of the MSTAR dataset, obtaining each method's recognition results on the test samples. To evaluate the results, the test-sample recognition rate of each simulation experiment was computed with the following formula:
$$\text{Accuracy}=\frac{T}{Q}\times 100\%$$
where Accuracy denotes the recognition rate on the test samples, T the number of correctly recognized test samples, and Q the total number of test samples; the larger the Accuracy value, the better the recognition performance.
The recognition rates of the compared methods in the above simulation experiment are shown in Table 1.
TABLE 1. Recognition rates on MSTAR test samples for different recognition methods

Method                     Recognition rate
Method of the invention    95.47%
SVM                        88.64%
AE                         86.81%
RBM                        87.84%
3. Analysis of simulation results.
The comparison baseline for analyzing simulation experiment 1 is the SAR imagery shown in FIG. 2, wherein:
FIG. 2 (a) is a measured-data image of a BMP2 armored car randomly selected from the MSTAR dataset;
FIG. 2 (b) is a measured-data image of a BTR70 armored car randomly selected from the MSTAR dataset;
FIG. 2 (c) is a measured-data image of a T72 main battle tank randomly selected from the MSTAR dataset.
Comparing FIG. 3 (a) and FIG. 3 (b) with FIG. 2 (a), FIG. 2 (b) and FIG. 2 (c) shows that the pseudo sample images generated by the generators built from the first and second groups of trained parameters, after inputting a group of noise, are very close to real MSTAR samples.
This comparison shows that adding the pseudo samples of FIG. 3 (a) and FIG. 3 (b) to the training of the discriminator increases the useful information available to it.
Analysis of simulation experiment 2 shows, per Table 1, that the recognition rate of the invention reaches 95.47%, the highest among the compared methods.

Claims (6)

1. An SAR target recognition method based on a multi-parameter-optimized generative adversarial network, characterized by comprising the following steps:
(1) Generating a training sample set and a test sample set:
(1a) Randomly select at least 200 images from each class of the synthetic aperture radar SAR image set to form an initial training sample set, and form a test sample set from all remaining samples;
(1b) Expand each image in the initial training sample set by translation, rotation and flipping to obtain an expanded training sample set, and combine the initial and expanded sets into the final training sample set;
(2) Setting the structure and the number of parameter groups of the generative adversarial network:
in TensorFlow, set the number of layers of the generator and the discriminator of the generative adversarial network and the number of convolution kernels in each layer, and set the number of groups of network parameters according to the required precision, namely set the numbers of groups of network parameters of the generator and the discriminator respectively;
(3) Training the generative adversarial network:
(3a) Fix the parameters of the discriminator, randomly generate a group of noise vectors, input them into the generator to obtain a group of generated pseudo samples, input the pseudo samples into the discriminator, and update the parameters of the generator by minimizing the generator's objective function;
(3b) Fix the parameters of the generator, randomly generate a group of noise vectors, input them into the generator to obtain a group of generated pseudo samples, input the pseudo samples together with the training data set into the discriminator, and update the parameters of the discriminator by maximizing the discriminator's objective function;
(3c) Judge whether the objective functions of the generator and the discriminator have converged: if not, return to step (3a); if they have converged, stop network training to obtain the trained generative adversarial network;
(4) Identifying the target model with the trained generative adversarial network:
(4a) Input all samples of the test sample set into the discriminator corresponding to each group of trained parameters to obtain each discriminator's output vector y_m;
(4b) Sum the output vectors y_m of all discriminators and average them to obtain a mean vector; the target-model class corresponding to the largest dimension of the mean vector is the model recognition result of the test sample.
2. The method of claim 1, wherein the data expansion of each image in the initial training sample set by translation, rotation and flipping in (1b) is implemented as follows:
(1b1) Shift each picture in the initial training sample set upwards, downwards, leftwards and rightwards by 30 pixels, respectively, to obtain the 4x translation-expanded samples;
(1b2) Rotate each picture in the initial training sample set clockwise by 45°, 90°, 135°, 180°, 225°, 270° and 315°, respectively, to obtain the 7x rotation-expanded samples;
(1b3) Flip each picture in the initial training sample set left-right and top-bottom, respectively, to obtain the 2x flip-expanded samples.
3. The method of claim 1, wherein setting the number of groups of network parameters in (2) means setting the generator's network parameters to N groups and the discriminator's network parameters to M groups; the precision of the generator network is proportional to N, the precision of the discriminator network is proportional to M, and N > 0, M > 0.
4. The method of claim 1, wherein the objective function of the generator in (3a) is expressed as follows:
$$\min_{G}\ \mathbb{E}_{z\sim p_z(z)}\big[\log p(y=K+1\mid G(z),D)\big]$$
where G denotes the generator network, D the discriminator network, z the noise, $p_z(z)$ the prior distribution of z, E(·) the expectation operator, G(z) the pseudo sample output when the noise z is input into the generator network G, K the total number of classes in the training sample set, K+1 the class label assigned to pseudo samples, and $p(y=K+1\mid G(z),D)$ the value of the (K+1)-th dimension of the output vector of the discriminator network D when its input is G(z);
for the N groups of generator network parameters $\theta_G^{(n)}$, the corresponding generator objective functions are:
$$\min_{\theta_G^{(n)}}\ \frac{1}{M}\sum_{m=1}^{M}\mathbb{E}_{z\sim p_z(z)}\big[\log p(y=K+1\mid G_n(z),D_m)\big],\qquad n=1,2,\ldots,N$$
where $\theta_G^{(n)}$ denotes the n-th group of generator network parameters, $G_n$ the generator network formed by the n-th group of generator network parameters, and $D_m$ the discriminator network formed by the m-th of the M groups of discriminator network parameters, n = 1, 2, ..., N, m = 1, 2, ..., M.
5. The method of claim 1, wherein the objective function of the discriminator in (3b) is expressed as follows:
$$\max_{D}\ \mathbb{E}_{(x,y)\sim p_{data}(x,y)}\big[\log p(y=l\mid x,D)\big]+\mathbb{E}_{z\sim p_z(z)}\big[\log p(y=K+1\mid G(z),D)\big]$$
where G denotes the generator network, D the discriminator network, x a real sample, y = l the label of the real sample with l = 1, 2, ..., K and K the total number of training-sample classes, $p_{data}(x,y)$ the joint distribution of samples and labels, $p(y=l\mid x,D)$ the value of the l-th dimension of the discriminator's output when the real sample x is input into the discriminator network, z the noise, $p_z(z)$ the prior distribution of z, E(·) the expectation operator, G(z) the pseudo sample output when the noise z is input into the generator network G, K+1 the class label assigned to pseudo samples, and $p(y=K+1\mid G(z),D)$ the value of the (K+1)-th dimension of the output vector of the discriminator network D when its input is G(z);
for the M groups of discriminator network parameters $\theta_D^{(m)}$, the corresponding discriminator objective functions are:
$$\max_{\theta_D^{(m)}}\ \mathbb{E}_{(x,y)\sim p_{data}(x,y)}\big[\log p(y=l\mid x,D_m)\big]+\frac{1}{N}\sum_{n=1}^{N}\mathbb{E}_{z\sim p_z(z)}\big[\log p(y=K+1\mid G_n(z),D_m)\big],\qquad m=1,2,\ldots,M$$
where $\theta_D^{(m)}$ denotes the m-th group of discriminator network parameters, $G_n$ the generator network formed by the n-th of the N groups of generator network parameters, and $D_m$ the discriminator network formed by the m-th group of discriminator network parameters, n = 1, 2, ..., N, m = 1, 2, ..., M.
6. The method of claim 1, wherein the model recognition result of the test sample obtained in (4b) is expressed as follows:
$$\bar{y}=\frac{1}{M}\sum_{m=1}^{M}y_m,\qquad \text{result}=\operatorname{findmax}(\bar{y})$$
where $y_m$ denotes the output vector of the m-th discriminator, a K-dimensional vector each dimension of which gives the probability that the discriminator assigns the test sample to that class; $\bar{y}$ is the mean vector obtained by summing the $y_m$ and averaging, m = 1, 2, ..., M, with M the number of groups of discriminator parameters; findmax(·) returns the dimension at which a vector attains its maximum value, and the dimension at which $\bar{y}$ is largest is the model recognition result of the test sample.
CN201910026176.7A 2019-01-11 2019-01-11 SAR target recognition method based on a multi-parameter-optimized generative adversarial network Active CN109766835B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910026176.7A CN109766835B (en) 2019-01-11 2019-01-11 SAR target recognition method based on a multi-parameter-optimized generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910026176.7A CN109766835B (en) 2019-01-11 2019-01-11 SAR target recognition method based on a multi-parameter-optimized generative adversarial network

Publications (2)

Publication Number Publication Date
CN109766835A CN109766835A (en) 2019-05-17
CN109766835B (en) 2023-04-18

Family

ID=66453973

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910026176.7A Active CN109766835B (en) SAR target recognition method based on a multi-parameter-optimized generative adversarial network

Country Status (1)

Country Link
CN (1) CN109766835B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110516525B (en) * 2019-07-01 2021-10-08 杭州电子科技大学 SAR image target recognition method based on GAN and SVM
CN110555811A (en) * 2019-07-02 2019-12-10 五邑大学 SAR image data enhancement method and device and storage medium
CN110472627B (en) * 2019-07-02 2022-11-08 五邑大学 End-to-end SAR image recognition method, device and storage medium
CN110297218B (en) * 2019-07-09 2022-07-15 哈尔滨工程大学 Method for detecting unknown modulation mode of radar signal based on generation countermeasure network
CN110401488B (en) * 2019-07-12 2021-02-05 北京邮电大学 Demodulation method and device
CN110609477B (en) * 2019-09-27 2021-06-29 东北大学 Electric power system transient stability discrimination system and method based on deep learning
CN111126503B (en) * 2019-12-27 2023-09-26 北京同邦卓益科技有限公司 Training sample generation method and device
CN111398955B (en) * 2020-03-13 2022-04-08 中国科学院电子学研究所苏州研究院 SAR image sidelobe removing method based on generation of antagonistic neural network
CN112766381B (en) * 2021-01-22 2023-01-24 西安电子科技大学 Attribute-guided SAR image generation method under limited sample
CN112949820B (en) * 2021-01-27 2024-02-02 西安电子科技大学 Cognitive anti-interference target detection method based on generation of countermeasure network
CN113537031B (en) * 2021-07-12 2023-04-07 电子科技大学 Radar image target identification method for generating countermeasure network based on condition of multiple discriminators
CN113723182A (en) * 2021-07-21 2021-11-30 西安电子科技大学 SAR image ship detection method under limited training sample condition
CN115277189B (en) * 2022-07-27 2023-08-15 中国人民解放军海军航空大学 Unsupervised intrusion flow detection and identification method based on generation type countermeasure network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107368852A (en) * 2017-07-13 2017-11-21 西安电子科技大学 A kind of Classification of Polarimetric SAR Image method based on non-down sampling contourlet DCGAN
CN107563428A (en) * 2017-08-25 2018-01-09 西安电子科技大学 Classification of Polarimetric SAR Image method based on generation confrontation network
CN108564115A (en) * 2018-03-30 2018-09-21 西安电子科技大学 Semi-supervised polarization SAR terrain classification method based on full convolution GAN
CN108764173A (en) * 2018-05-31 2018-11-06 西安电子科技大学 The hyperspectral image classification method of confrontation network is generated based on multiclass

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10262236B2 (en) * 2017-05-02 2019-04-16 General Electric Company Neural network training image generation system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107368852A (en) * 2017-07-13 2017-11-21 西安电子科技大学 A kind of Classification of Polarimetric SAR Image method based on non-down sampling contourlet DCGAN
CN107563428A (en) * 2017-08-25 2018-01-09 西安电子科技大学 Classification of Polarimetric SAR Image method based on generation confrontation network
CN108564115A (en) * 2018-03-30 2018-09-21 西安电子科技大学 Semi-supervised polarization SAR terrain classification method based on full convolution GAN
CN108764173A (en) * 2018-05-31 2018-11-06 西安电子科技大学 The hyperspectral image classification method of confrontation network is generated based on multiclass

Also Published As

Publication number Publication date
CN109766835A (en) 2019-05-17

Similar Documents

Publication Publication Date Title
CN109766835B (en) SAR target recognition method based on a multi-parameter-optimized generative adversarial network
Cui et al. Image data augmentation for SAR sensor via generative adversarial nets
CN109086700B (en) Radar one-dimensional range profile target identification method based on deep convolutional neural network
CN107229918B (en) SAR image target detection method based on full convolution neural network
CN107103338B (en) SAR target recognition method integrating convolution features and integrated ultralimit learning machine
CN109902715B (en) Infrared dim target detection method based on context aggregation network
CN108764310B (en) SAR target recognition method based on multi-scale multi-feature depth forest
Wan et al. Recognizing the HRRP by combining CNN and BiRNN with attention mechanism
CN112990334A (en) Small sample SAR image target identification method based on improved prototype network
CN113610151B (en) Small sample image classification system based on prototype network and self-encoder
CN107862680B (en) Target tracking optimization method based on correlation filter
CN106485651A (en) The image matching method of fast robust Scale invariant
CN105913083A (en) Dense SAR-SIFT and sparse coding-based SAR classification method
CN109255339B (en) Classification method based on self-adaptive deep forest human gait energy map
CN113240047A (en) SAR target recognition method based on component analysis multi-scale convolutional neural network
CN109801208B (en) SAR image change detection method based on multi-GPU task optimization
CN106951822B (en) One-dimensional range profile fusion identification method based on multi-scale sparse preserving projection
Yu et al. Application of a convolutional autoencoder to half space radar hrrp recognition
CN109766899B (en) Physical feature extraction and SVM SAR image vehicle target recognition method
Li et al. SAR image object detection based on improved cross-entropy loss function with the attention of hard samples
CN114943889A (en) SAR image target identification method based on small sample incremental learning
CN111046861B (en) Method for identifying infrared image, method for constructing identification model and application
CN107403136A (en) The SAR target model recognition methods of dictionary learning is kept based on structure
Zhou et al. Complex background SAR target recognition based on convolution neural network
Nie et al. LFC-SSD: Multiscale aircraft detection based on local feature correlation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant