CN111077523B - Inverse synthetic aperture radar imaging method based on generative adversarial network - Google Patents


Info

Publication number
CN111077523B
CN111077523B (application CN201911280745.7A)
Authority
CN
China
Prior art keywords
network
image
gan
isar
data
Prior art date
Legal status
Active
Application number
CN201911280745.7A
Other languages
Chinese (zh)
Other versions
CN111077523A (en
Inventor
汪玲 (Wang Ling)
李泽 (Li Ze)
胡长雨 (Hu Changyu)
Current Assignee
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201911280745.7A
Publication of CN111077523A
Application granted
Publication of CN111077523B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Abstract

The invention discloses an inverse synthetic aperture radar (ISAR) imaging method based on a generative adversarial network (GAN), wherein the GAN consists of a generator network and a discriminator network. The generator network extracts feature representations and preserves low-dimensional feature information using convolutional layers and residual network modules, and reconstructs the ISAR target image using deconvolution layers. The discriminator network extracts feature information from the ISAR image output by the generator network using convolutional layers and judges whether that image is real. In the network training stage, the parameters of each layer in the generator and discriminator networks are updated using the training error output by the discriminator network. The trained generator network is then separated from the GAN for undersampled ISAR data imaging. In the imaging stage, a low-quality target image, obtained from undersampled ISAR target echo data by the range-Doppler (RD) method, is input to the generator network, whose output is the corresponding high-quality ISAR target image. The imaging quality and computational efficiency of the invention are superior to those of the conventional range-Doppler imaging method and to compressed sensing imaging results.

Description

Inverse synthetic aperture radar imaging method based on generative adversarial network
Technical Field
The invention relates to the technical field of radar signal processing, and in particular to an inverse synthetic aperture radar (ISAR) imaging method based on a generative adversarial network (GAN).
Background
The inverse synthetic aperture radar is a typical imaging radar system, mainly used for acquiring high-resolution images of non-cooperative moving targets, and is an effective means of target identification. The conventional radar imaging method is the range-Doppler (RD) method, which obtains high azimuth resolution by exploiting the Doppler modulation of the echo signals within the coherent accumulation time, i.e., the coherent processing interval (CPI).
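As a minimal illustration of the RD principle described above (an assumed toy setup, not the patent's actual processing chain): azimuth compression amounts to an FFT along the slow-time pulse axis, so a scatterer with a constant Doppler frequency focuses to a single peak in the Doppler dimension.

```python
import numpy as np

def rd_image(echo: np.ndarray) -> np.ndarray:
    """Minimal range-Doppler sketch: `echo` is assumed already range-compressed
    (rows = range gates, columns = slow-time pulses); azimuth compression is
    an FFT along the pulse axis within one CPI."""
    doppler = np.fft.fftshift(np.fft.fft(echo, axis=1), axes=1)
    return np.abs(doppler)

# A single ideal scatterer with constant Doppler maps to one focused peak.
pulses = np.arange(256)
echo = np.zeros((256, 256), dtype=complex)
echo[100, :] = np.exp(2j * np.pi * 0.25 * pulses)  # Doppler: 0.25 cycles/pulse
img = rd_image(echo)
peak = np.unravel_index(np.argmax(img), img.shape)  # range gate 100, Doppler bin 192
```

Since the Doppler frequency here falls exactly on an FFT bin, all energy focuses into one pixel; real echoes spread over side lobes, which is what limits RD quality under undersampling.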
Professor Baraniuk et al. introduced compressive sensing (CS) theory into the field of radar imaging in 2007. Since then, CS-based ISAR imaging methods have received increasing attention from scholars at home and abroad. CS-based ISAR imaging can reduce the complexity of the radar system and image with very little data. Because CS ISAR imaging emphasizes reconstruction of the scattering points in the target area, the corresponding imaging results have high contrast and few side lobes, which benefits subsequent image analysis and target identification. However, the performance of CS ISAR imaging methods is still limited by problems such as inaccurate sparse representation and the low efficiency of reconstruction methods.
Since 2012, deep learning (DL) technology has attracted researchers' attention and has begun to demonstrate remarkable information processing capabilities in application areas such as computer vision. It has achieved results beyond the reach of traditional machine learning methods in many computer vision tasks, such as image classification, image target detection and tracking, and image reconstruction. Most image reconstruction work has centered on improving the quality of optical and medical image reconstruction. In optical image reconstruction, the cascaded auto-encoder proposed by Baraniuk et al., the recurrent enhancement network proposed by Dave, and networks such as DeepInverse and ReconNet have all achieved noteworthy reconstruction performance. In medical image reconstruction, the deep residual network proposed by Han et al., the multi-level hierarchical CNN proposed by Kyong Hwan Jin et al., the deeply cascaded CNN proposed by Schlemper et al., and the ADMM network (Basic-ADMM-Net) proposed by Yang et al. have all obtained imaging results superior to CS methods.
Given the obvious advantages of DL technology, researchers have begun to explore applications of DL in the remote sensing field, trying to provide new approaches to remote sensing tasks, such as DL-based super-resolution of remote sensing images, DL-based intelligent segmentation of remote sensing images, and DL-based high-speed target detection and tracking in remote sensing images. Remote sensing image reconstruction has been studied widely at home and abroad. In 2018, the team of Professor Artem Nikonorov used deep CNNs to compensate for the inherent distortion in images captured by a hyperspectrometer, improving the quality of the acquired images. In the same year, Claas Grohnfeldt's group proposed a conditional generative adversarial network (cGAN) architecture for fusing SAR and optical multispectral (MS) image data to generate cloud- and fog-free MS optical data. Lloyd H. Hughes's team proposed a generative adversarial network fused with an auto-encoder (AE-GAN) for SAR-optical matching data generation, producing realistic SAR images. Meanwhile, Liyun Song's team at Xidian University proposed a deep difference CNN model that further enhances spatial information and retains spectral information while achieving high-resolution hyperspectral image reconstruction. A super-resolution (SR) reconstruction method for SAR images based on SRGAN was proposed by Wang Longgang's team at a Beijing university for super-resolution reconstruction of SAR images.
By contrast, DL technology has been discussed far less in radar imaging itself. Professor Yazici of Rensselaer Polytechnic Institute was among the earliest to introduce DL into the field of radar imaging, realizing DL-based passive radar imaging. Meanwhile, Qin Yuliang et al. of the National University of Defense Technology first applied DNNs to radar imaging, constructing a 5-layer complex-valued DNN from complex-valued fully connected layers, convolutional layers, and activation-function layers to image undersampled radar echo data. In 2019, a fully convolutional neural network was constructed that outperformed compressive sensing ISAR imaging methods in both the quality and the efficiency of undersampled ISAR data imaging.
Disclosure of Invention
The technical problem to be solved by the invention is to overcome the defects of the prior art and provide an inverse synthetic aperture radar imaging method based on a generative adversarial network.
The invention adopts the following technical scheme to solve this technical problem:
The invention provides an inverse synthetic aperture radar imaging method based on a generative adversarial network, comprising the following steps:
Step S1, constructing an inverse synthetic aperture radar (ISAR) data set for training the GAN;
Step S2, constructing the generator network of the GAN using convolutional layers, deconvolution layers, batch normalization (BN) layers, activation-function layers, and feature-splicing and residual-connection strategies; constructing the discriminator network of the GAN using convolutional layers, BN layers, activation-function layers, a Flatten layer, and fully connected layers;
Step S3, based on the ISAR data set generated in step S1 and a given loss-function form, learning the parameters of the GAN by combining the back-propagation algorithm with the Adam algorithm; when the training loss of the GAN is sufficiently small and stable, stopping the parameter updates to obtain a GAN that meets the task requirements;
Step S4, imaging undersampled ISAR data using the generator network of the GAN.
As a further optimization of the inverse synthetic aperture radar imaging method based on the generative adversarial network, step S1 specifically comprises: setting 256 range gates in the range direction of the ISAR echo data, setting different echo-pulse starting positions and pulse sampling intervals in the azimuth direction, and collecting 256 echo pulses to obtain an ISAR echo data matrix of size 256 × 256; on this basis, obtaining multiple groups of data by randomly shifting the data matrix in the range direction;
in the GAN training process, the GAN parameters are updated by combining a back-propagation strategy with the Adam algorithm; each group of data is randomly down-sampled in the range and azimuth directions and directly imaged to obtain an initial image; the initial image serves as the input data of the GAN, and the well-focused, high-quality imaging result obtained from each group of data by the RD algorithm serves as the desired output, i.e., the target image; an initial image and a target image form one training sample; many training samples constructed in this way constitute the inverse synthetic aperture radar (ISAR) data set for training the GAN.
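The sample-construction step above can be sketched as follows. This is a hypothetical NumPy stand-in: a plain 2-D FFT plays the role of the RD imaging step, and `make_training_pair` is an illustrative name not taken from the patent.

```python
import numpy as np

def make_training_pair(echo: np.ndarray, rate: float = 0.25, seed: int = 0):
    """Sketch of one GAN training sample: zero-fill randomly selected
    range/azimuth lines (keeping `rate` of each dimension), then image both
    the undersampled and the full echo with a 2-D FFT as a stand-in RD step."""
    rng = np.random.default_rng(seed)
    n_rg, n_az = echo.shape
    keep_rg = rng.choice(n_rg, int(rate * n_rg), replace=False)
    keep_az = rng.choice(n_az, int(rate * n_az), replace=False)
    mask = np.zeros_like(echo, dtype=float)
    mask[np.ix_(keep_rg, keep_az)] = 1.0   # two-dimensional random sampling

    rd = lambda e: np.abs(np.fft.fftshift(np.fft.fft2(e)))
    initial = rd(echo * mask)   # low-quality image: the network input
    target = rd(echo)           # full-data image: the desired output
    return initial, target

echo = (np.random.default_rng(1).standard_normal((256, 256))
        + 1j * np.random.default_rng(2).standard_normal((256, 256)))
initial, target = make_training_pair(echo)
```

Repeating this with random range shifts of the echo matrix, as the patent describes, would yield the multiple training samples of the data set.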
As a further optimization of the inverse synthetic aperture radar imaging method based on the generative adversarial network, in step S2,
the generator network is used to extract the optimal feature representation from the input initial image and reconstruct the imaging result; the network consists of a contraction part and an expansion part; the contraction part uses convolutional layers and residual network modules to extract feature data from the input initial image and performs dimension-reduction operations; the expansion part uses deconvolution to reconstruct the feature representation; in the expansion part, feature representations of the same size from the contraction and expansion processes are concatenated, and features are extracted from the concatenated representations using residual network modules and convolution; a residual learning mechanism is added at the last layer of the network, which sums the initial image with the feature data reconstructed by the network to obtain the final ISAR imaging result;
the discriminator network is used to judge the authenticity of the generated image output by the GAN's generator network, i.e., whether the generated image is close to the target image; it extracts feature data from the input sample using convolution and residual network modules and performs dimension-reduction operations; finally, the two-dimensional feature data are flattened to one dimension by a Flatten layer, all local features extracted by convolution are integrated through two fully connected layers, and the judgment result is output through a Sigmoid activation function.
As a further optimization of the inverse synthetic aperture radar imaging method based on the generative adversarial network, the loss function in step S3 is divided into two parts: the loss function of the generator network and the loss function of the discriminator network;
the loss function of the generator network comprises an image-generation loss function and an adversarial loss function based on the feature space; the image-generation loss takes the form of a mean-square-error loss l_MSE, used to compute the reconstruction error between the generated ISAR image and the label ISAR image, as shown in formula (1); the adversarial loss l_LS is a least-squares loss, used to compute the error in the discriminator between the label given to the generated image and the real-image label, as shown in formula (2); the two losses are each multiplied by a weighting coefficient and added to form the generator loss L_G, as shown in (3);
\[ l_{\mathrm{MSE}} = \frac{1}{n}\sum_{i=1}^{n} \left\| G(\tilde{x}_i) - \sigma_i \right\|_2^2 \tag{1} \]
\[ l_{\mathrm{LS}} = \frac{1}{n}\sum_{i=1}^{n} \left( D(G(\tilde{x}_i)) - 1 \right)^2 \tag{2} \]
\[ L_G = 0.5\, l_{\mathrm{MSE}} + 0.5\, l_{\mathrm{LS}} \tag{3} \]
wherein i denotes the i-th training sample, n is the number of samples in one batch of the mini-batch stochastic gradient descent operation, \tilde{x}_i denotes the initial image in the i-th training sample, \sigma_i denotes the target image of the i-th sample, G(\cdot) denotes the generator network output of the GAN, and D(\cdot) denotes the discriminator network output;
the loss function L_D of the discriminator network is composed of two least-squares loss terms, as shown in (4); the first term computes the error between the discriminator's label for the generated image and the fake-image label, and the second term computes the error between the discriminator's label for the target image and the real-image label:
\[ L_D = \frac{1}{n}\sum_{i=1}^{n} \left[ \left( D(G(\tilde{x}_i)) \right)^2 + \left( D(\sigma_i) - 1 \right)^2 \right] \tag{4} \]
The GAN parameters are updated with the Adam algorithm within a back-propagation strategy; the generator network and the discriminator network are trained by alternating updates, i.e., when the parameters of one network are updated, those of the other are held fixed; when the GAN training loss is sufficiently small and stable, training ends and a network meeting the task requirements is obtained.
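Formulas (1) through (4) can be transcribed directly. The sketch below is a NumPy transcription under the label convention real = 1 and fake = 0; `generator_loss` and `discriminator_loss` are illustrative names, not identifiers from the patent.

```python
import numpy as np

def generator_loss(gen, tgt, d_gen):
    """L_G = 0.5*l_MSE + 0.5*l_LS per formulas (1)-(3): `gen`/`tgt` are
    batches of generated and target images, `d_gen` the discriminator
    scores D(G(x)) for the generated batch."""
    l_mse = np.mean(np.sum((gen - tgt) ** 2, axis=(1, 2)))  # formula (1)
    l_ls = np.mean((d_gen - 1.0) ** 2)                      # formula (2)
    return 0.5 * l_mse + 0.5 * l_ls                         # formula (3)

def discriminator_loss(d_gen, d_real):
    """L_D per formula (4): push D(G(x)) toward the fake label 0 and
    D(target) toward the real label 1."""
    return np.mean(d_gen ** 2) + np.mean((d_real - 1.0) ** 2)

# A perfect generator and a perfectly confident discriminator drive
# each loss to zero.
g = np.ones((4, 8, 8))
t = np.ones((4, 8, 8))
lg = generator_loss(g, t, np.ones(4))
ld = discriminator_loss(np.zeros(4), np.ones(4))
```

These least-squares (LSGAN-style) losses replace the usual cross-entropy GAN loss, which matches the "least square loss function" wording of the patent.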
As a further optimization of the inverse synthetic aperture radar imaging method based on the generative adversarial network, in step S4 the generator network of the GAN is used to image undersampled ISAR data: the ISAR data are randomly down-sampled in the range and azimuth directions at 25% of the original sampling rate, and the two-dimensionally randomly down-sampled ISAR echo data are imaged by the range-Doppler (RD) method to obtain a low-quality image, called the initial image; the initial image is fed to the trained GAN's generator network, whose output is the final imaging result.
Compared with the prior art, the invention adopting the technical scheme has the following technical effects:
(1) The invention provides a GAN to image undersampled ISAR data, with the following characteristics: first, residual network blocks are introduced, deepening the network to obtain richer feature information while avoiding the vanishing-gradient problem that greater depth would otherwise cause; second, a feature-space loss is added to the generator loss, guiding the generator to produce reconstructed images closer to real samples; finally, strided convolutions replace pooling layers, reducing the loss of spatial position information and lowering the probability of sparse gradients;
(2) By constructing a GAN with multiple hidden layers and generating a training data set containing a large number of ISAR target images of the same type, the parameters of each layer of the GAN's generator and discriminator are learned; on this basis, ISAR imaging is realized with the trained GAN generator; the learned generator establishes a mapping from the input low-quality initial target image to the high-quality target image, so the GAN-based imaging network proposed by the invention can reconstruct high-quality ISAR images.
Drawings
Fig. 1 is an imaging schematic of GAN.
Fig. 2 is a diagram of a generator network architecture.
Fig. 3 is a diagram of a discriminator network structure.
Fig. 4 is a full data RD imaging result.
FIG. 5 shows the imaging results of different methods, where (a) is the 25% data GAN imaging result, (b) the 25% data OMP imaging result, (c) the 25% data GKF imaging result, and (d) the 25% data null-space L1 norm minimization imaging result.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
The present invention is carried out according to the procedure shown in FIG. 1. The method is divided into a training phase and an imaging phase.
In the training phase of the GAN, a data set for training the GAN is first constructed. When constructing the ISAR data set, 256 range gates are set in the range direction of the ISAR echo data, different echo-pulse starting positions and pulse sampling intervals are set in the azimuth direction, and 256 echo pulses are collected, yielding multiple ISAR echo data matrices, each of size 256 × 256. Multiple groups of data are then generated from the obtained ISAR echo data by a random range-shift strategy. Each group of data is randomly down-sampled in the range and azimuth directions and imaged with the RD algorithm to obtain an initial image. The focused image obtained from each group of data by the RD algorithm serves as the target image. An initial image and a target image form one GAN training sample, and 1600 training samples are constructed in this way.
Next, the GAN is constructed and its parameters are learned from the training data set. The GAN generator network is constructed as shown in FIG. 2 and the discriminator network as shown in FIG. 3. The generator network extracts the optimal feature representation from the input initial image, reduces its dimensionality, and reconstructs the final imaging result from the low-dimensional feature data. The generator network consists of a contraction part, which initially extracts the feature representation from the input, and an expansion part, which reconstructs the image from that representation. The discriminator network judges the 'authenticity' of the generated image output by the GAN generator network, i.e., whether the generated image is close to the target image.
The construction of the GAN comprises two parts, a generator network and a discriminator network. The structure and function of the generator network and the discriminator network are set forth as follows:
the generator network is used to extract the optimal feature representation from the input initial image and reconstruct the final imaging result. The generator network is divided into a contraction part and an expansion part.
The contraction part has six stages. In the first stage, the input initial image is convolved with 3 × 3 kernels of stride 1 to extract 64 feature maps. In the second to fifth stages, each stage first uses a residual network module to extract 64, 128, 256, and 512 feature maps in turn, and then performs a dimension-reduction operation on the extracted feature representations. In the sixth stage, 1024 feature maps are extracted with a residual network module.
The expansion part has five stages. In the first to fourth stages, each stage first uses a deconvolution reconstruction module to reconstruct feature maps, producing 512, 256, 128, and 64 maps in turn; the reconstructed maps are concatenated with the feature maps extracted at the corresponding stage of the contraction part (the concatenation partners are the feature maps from the fifth, fourth, third, and second contraction stages, respectively), and a residual network module extracts 512, 256, 128, and 64 feature maps in turn from the concatenated maps as the reconstruction input of the next stage. In the fifth stage, features are extracted with a convolutional layer of stride 1 and kernel size 1 × 1, and the result is summed point by point with the initial image to give the output.
In the generator network, a residual network module consists of three convolutional layers of stride 1 and kernel size 3 × 3, and the dimension-reduction operation is performed by one convolutional layer of stride 2 and kernel size 3 × 3. The deconvolution reconstruction module consists of a deconvolution layer of stride 2 and kernel size 3 × 3, followed by a BN operation, with ReLU as the activation function. Throughout the generator network, a BN operation follows every convolution except the last, with ReLU as the activation function; the last convolutional layer uses Tanh as its activation function.
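Assuming 'same' padding of 1 for the 3 × 3 kernels (the patent does not state the padding), the spatial sizes along the contraction path can be traced with the standard convolution output-size formula:

```python
def conv_out(size: int, kernel: int = 3, stride: int = 1, pad: int = 1) -> int:
    """Standard convolution output-size formula: floor((N + 2p - k)/s) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# Contraction path of the generator on a 256x256 input: stage 1 keeps the
# size; each of stages 2-5 ends with a stride-2 3x3 dimension-reduction
# convolution; stage 6 applies a residual module only.
size = 256
channels = [64, 64, 128, 256, 512, 1024]
trace = [(size, channels[0])]           # stage 1
for c in channels[1:5]:                 # stages 2-5
    size = conv_out(size, stride=2)     # dimension-reduction convolution
    trace.append((size, c))
trace.append((size, channels[5]))       # stage 6
# trace: 256 -> 128 -> 64 -> 32 -> 16, ending at 16x16 with 1024 maps
```

Under this padding assumption the map is halved at each reduction, so the expansion part's four stride-2 deconvolutions restore 16 × 16 back to 256 × 256, matching the sizes needed for the feature concatenation described above.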
The discriminator network performs 'real-or-fake' identification of the generated image output by the GAN generator network, i.e., it measures the error between the generator's output and the target image. The discriminator network has six stages. In the first stage, 64 feature maps are extracted from the input generated image with a convolutional layer of stride 1 and kernel size 3 × 3. In the second to fifth stages, each stage first uses a residual network module to extract 64, 128, 256, and 512 feature maps in turn from the previous layer's output, and then performs a dimension-reduction operation on the extracted feature representation. In the sixth stage, 1024 feature maps are extracted with a residual network module, flattened to one dimension by a Flatten layer, synthesized by two fully connected layers, and fed to a Sigmoid activation function, whose output is the judgment result.
In the discriminator network, the residual network module consists of two convolutional layers of stride 1 and kernel size 3 × 3; dimension reduction is performed by a convolution of stride 2 and kernel size 3 × 3. A BN layer follows each convolution, with LeakyReLU as the activation function; the first fully connected layer uses LeakyReLU and the second uses Sigmoid as its activation function.
Next, the loss function of the GAN is designed, and the weights of the GAN's neurons are updated by combining the back-propagation algorithm with the Adam optimization algorithm. The generator and discriminator networks are trained by alternating updates, i.e., while one network's parameters are updated, the other's are held fixed; the generator's parameters are updated once for every three updates of the discriminator's parameters. When the GAN training loss is sufficiently small and stable, training ends and a network meeting the task requirements is obtained.
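The alternating schedule, three discriminator updates per generator update, can be sketched abstractly (update steps are represented as labels only; no actual networks are involved, and `training_schedule` is an illustrative name):

```python
def training_schedule(iterations: int, d_per_g: int = 3):
    """Sketch of the alternating update order: the discriminator is updated
    `d_per_g` times for every generator update, with the other network's
    parameters held fixed during each step."""
    steps = []
    for _ in range(iterations):
        steps.extend(["D"] * d_per_g)   # update D while G is frozen
        steps.append("G")               # update G while D is frozen
    return steps

schedule = training_schedule(2)  # two generator updates' worth of steps
```

Giving the discriminator more updates per cycle is a common way to keep its feedback informative while the generator catches up.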
In the imaging stage, the generator network is extracted from the GAN for imaging. The initial image generated from ISAR data down-sampled to 25% of the original rate is fed to the trained generator network, which outputs the final imaging result, shown in FIG. 5(a).
Embodiment
Fig. 4 shows the result of imaging the ISAR full data using the RD method.
New ISAR echo data different from the training set are selected, down-sampled to 25%, and imaged with the trained generator network; the result is shown in FIG. 5(a).
In order to verify the effectiveness of the imaging method, the GAN imaging result is compared with the image reconstruction results of orthogonal matching pursuit (OMP), null-space L1 norm minimization, and greedy Kalman filtering (GKF). The imaging results of these methods are shown in FIG. 5(b) to (d).
Comparing FIG. 4 with FIG. 5(a), it can be seen that the GAN uses 25% of the data to obtain an imaging result very close to the full-data RD result. Comparing FIG. 5(a) to (d), the GAN result has fewer stray points in the background, and the main body of the airplane is clearly identifiable. In FIG. 5(b) to (d), the OMP, GKF, and null-space L1 norm minimization methods cannot reconstruct the body of the aircraft completely and clearly, and their results are accompanied by strong false-scattering-point interference.
FIG. 5 was evaluated with image evaluation functions, and the computation time of each method was recorded; the results are shown in Table 1.
The image evaluation functions include 'true-value'-based evaluation functions and conventional image evaluation functions. The 'true-value'-based indices are: false alarm (FA), missed detection (MD), and relative root mean square error (RRMSE). FA evaluates the number of erroneously reconstructed scattering points, MD evaluates correct scattering points that were not reconstructed, and RRMSE evaluates the reconstruction error of the scattering-point amplitudes. Because no ground-truth image exists, the well-focused, high-quality RD image obtained from the full data serves as the 'true-value' image in the experiment, so what is actually measured is the quality of all methods relative to the RD imaging result. The conventional imaging-quality indices are: target-to-clutter ratio (TCR), image entropy (ENT), and image contrast (IC).
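Common forms of these indices can be sketched as follows (the patent does not spell out its exact formulas, so the definitions below are standard assumptions: RRMSE as a relative Frobenius-norm error, entropy over the normalized intensity distribution, and contrast as the intensity standard deviation over its mean):

```python
import numpy as np

def rrmse(img: np.ndarray, ref: np.ndarray) -> float:
    """Relative root-mean-square error against a reference ('true-value')
    image; one common form, assumed here."""
    return float(np.linalg.norm(img - ref) / np.linalg.norm(ref))

def image_entropy(img: np.ndarray) -> float:
    """Entropy of the normalized image intensity distribution;
    better-focused images give lower entropy."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]                     # skip empty bins so log is defined
    return float(-np.sum(p * np.log(p)))

def image_contrast(img: np.ndarray) -> float:
    """Radar image contrast: standard deviation of the intensity divided
    by its mean; sharper images give higher contrast."""
    inten = np.abs(img) ** 2
    return float(np.sqrt(np.mean((inten - inten.mean()) ** 2)) / inten.mean())
```

FA and MD additionally require a scattering-point detection threshold, which the patent does not specify, so they are omitted from this sketch.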
As can be seen from Table 1, the FA and MD values of GAN imaging are the smallest, meaning that, with the well-focused full-data RD image as reference, the GAN result has the fewest erroneously reconstructed scattering points and the fewest unreconstructed scattering points. This is consistent with the comparison of FIG. 5(a) against FIG. 5(b) to (d). Note also that the null-space L1 norm minimization algorithm has the largest MD value because it emphasizes sparse reconstruction. Comparing the RRMSE index further, the GAN image has the smallest RRMSE, indicating the smallest amplitude reconstruction error of the scattering points. The TCR of the GAN result is significantly higher than that of the OMP and GKF methods, showing strong target-to-clutter contrast and more thorough background suppression, together with a small image entropy and a large image contrast.
The computation time of each method is shown in the last column of Table 1: once the network is trained, GAN imaging takes about 7 seconds, markedly more efficient than the other methods.
Table 1. Quantitative evaluation of imaging results of 25% undersampled data under different imaging methods
Method                    FA   MD   RRMSE    TCR(dB)   ENT      IC        Time(s)
GAN                        3   63   0.1944   80.5971   4.3622   10.4814     7.34717
OMP                       91   83   0.3146   49.1832   4.9304    7.9220    36.0520
GKF                       51   93   0.2567   54.515    4.6464    9.0974   236.4908
Null-space L1 norm min.   19  116   0.2534   63.7124   4.2482   11.3440   523.5010
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention.

Claims (4)

1. An inverse synthetic aperture radar imaging method based on a generative adversarial network, characterized by comprising the following steps:
s1, constructing an ISAR (inverse synthetic aperture radar) data set of the training GAN;
s2, constructing a generator network of the GAN by utilizing the convolution layer, the deconvolution layer, the batch normalization BN layer, the activation function layer and the characteristic splicing and residual connecting strategies; constructing a GAN discriminator network by utilizing the convolution layer, the BN layer, the activation function layer, the flat layer and the full connection layer;
step S3, learning parameters of the GAN by combining a back propagation algorithm and an Adam algorithm after a loss function form is given based on the ISAR data set of the training GAN generated in the step S1; when the training loss of the GAN is small enough and tends to be stable, stopping updating the network parameters, and obtaining the GAN meeting the task requirement;
step S4, realizing ISAR undersampled data imaging by utilizing a generator network of GAN;
in the step S2, in the step S,
the generator network is used for extracting the optimal feature representation from the input initial image and reconstructing an imaging result, and the network consists of a contraction part and an expansion part; the contraction part utilizes the convolution layer and the residual error network module to extract characteristic data of the input primary image and carries out dimension reduction operation; the expansion part utilizes deconvolution to carry out feature representation reconstruction; in the expansion part, cascading the feature representations with the same size in the contraction and expansion processes, and extracting the features of the cascaded feature representations by using a residual error network module and convolution; adding a residual error learning mechanism in the last layer of the network, and finally summing the initial image and the characteristic data reconstructed by the network in the network to obtain a final ISAR imaging result;
the discriminator network performs authenticity discrimination on the generated image output by the generator network of the GAN, i.e. it judges whether the generated image is close to the target image; it extracts feature data from the input sample with convolution and residual network modules and performs dimension-reduction operations; finally, the two-dimensional feature data is flattened to one dimension by a Flatten layer, all local features extracted by the convolutions are integrated by two fully connected layers, and the judgment result is output through a Sigmoid activation function.
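As a loose illustration (not the patented network itself), the two connection strategies named in claim 1, the feature splicing of same-sized contraction/expansion maps and the global residual sum at the last layer, can be sketched with NumPy arrays standing in for feature maps; the (channels, height, width) layout and the toy shapes are assumptions:

```python
import numpy as np

def splice_features(enc_feat, dec_feat):
    """Feature splicing: concatenate a contraction-path feature map with the
    same-sized expansion-path map along the channel axis (C, H, W layout)."""
    assert enc_feat.shape[1:] == dec_feat.shape[1:], "spatial sizes must match"
    return np.concatenate([enc_feat, dec_feat], axis=0)

def residual_output(primary_image, network_reconstruction):
    """Global residual learning at the last layer: the network output is the
    sum of the input primary image and the reconstructed feature data."""
    return primary_image + network_reconstruction

# toy feature maps: 8 channels of 64x64 from each path
enc = np.ones((8, 64, 64))
dec = np.zeros((8, 64, 64))
spliced = splice_features(enc, dec)   # 16 channels after splicing
```

The residual connection means the network only has to learn the correction to the coarse primary image rather than the full image, which is the usual motivation for this design.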
2. The inverse synthetic aperture radar imaging method based on a generative adversarial network of claim 1, wherein step S1 is as follows: setting 256 range gates in the range direction of the ISAR echo data, setting different echo-pulse starting positions and pulse sampling intervals in the azimuth direction, and collecting 256 echo pulses to obtain an ISAR echo data matrix of size 256 × 256; on this basis, multiple groups of data are obtained by a strategy of randomly shifting the data matrix along the range direction;
during GAN training, the GAN parameters are updated by a back-propagation strategy combined with the Adam algorithm; each group of data is randomly down-sampled in the range and azimuth directions and imaged directly to obtain a primary image; the primary image is taken as the input data of the GAN, and the well-focused, high-quality imaging result obtained from each group of data by the RD algorithm is taken as the expected output, i.e. as the target image; a primary image and its target image form one training sample; a number of training samples constructed in this way constitute the ISAR data set for training the GAN.
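A minimal NumPy sketch of the sample-construction step above, with a 2-D FFT magnitude standing in for the RD algorithm and a synthetic point-scatterer echo standing in for real data (both are assumptions; the patent's actual RD processing and echo model are not reproduced here):

```python
import numpy as np

def rd_image(echo):
    # simplified stand-in for RD imaging: 2-D FFT magnitude of the echo matrix
    return np.abs(np.fft.fftshift(np.fft.fft2(echo)))

def make_training_pair(echo, keep_ratio, rng):
    """Build one (primary image, target image) sample: the target is the RD
    image of the full 256x256 echo; the primary image is the RD image after
    randomly keeping `keep_ratio` of the range rows and azimuth columns."""
    n_rg, n_az = echo.shape
    rows = rng.choice(n_rg, int(keep_ratio * n_rg), replace=False)
    cols = rng.choice(n_az, int(keep_ratio * n_az), replace=False)
    sparse = np.zeros_like(echo)
    sparse[np.ix_(rows, cols)] = echo[np.ix_(rows, cols)]
    return rd_image(sparse), rd_image(echo)

# synthetic 256x256 echo of one point scatterer on integer frequency bins
m, k = np.meshgrid(np.arange(256), np.arange(256), indexing="ij")
echo = np.exp(2j * np.pi * (25 * m + 51 * k) / 256)
primary, target = make_training_pair(echo, keep_ratio=0.25,
                                     rng=np.random.default_rng(0))
```

With the full echo, all 256 × 256 samples add coherently at the scatterer's bin; the undersampled primary image keeps only 64 × 64 cells, so its peak is far lower, which is exactly the quality gap the generator is trained to close.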
3. The inverse synthetic aperture radar imaging method based on a generative adversarial network of claim 1, wherein the loss function in step S3 is divided into two parts: the loss function of the generator network and the loss function of the discriminator network;
the loss function of the generator network comprises an image-generation loss function and an adversarial loss function based on the feature space; the image-generation loss function $l_{MSE}$ is a mean-square-error loss used to compute the reconstruction error between the generated ISAR image and the label ISAR image, as shown in equation (1); the adversarial loss function $l_{LS}$ is a least-squares loss used to compute, in the discriminator, the error between the label assigned to the generated image and the real-image label, as shown in equation (2); the two loss functions are each multiplied by a weighting coefficient and then added to form the generator loss function $L_G$, as shown in equation (3);
$$l_{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left\| G(s_i) - \sigma_i \right\|_2^2 \qquad (1)$$

$$l_{LS} = \frac{1}{n}\sum_{i=1}^{n}\left( D(G(s_i)) - 1 \right)^2 \qquad (2)$$

$$L_G = 0.5\, l_{MSE} + 0.5\, l_{LS} \qquad (3)$$

wherein $i$ indexes the $i$-th training sample, $n$ is the number of samples in one batch of the mini-batch stochastic gradient descent operation, $s_i$ represents the primary image in the $i$-th training sample, $\sigma_i$ represents the target image of the $i$-th sample, $G(\cdot)$ represents the output of the generator network of the GAN, and $D(\cdot)$ represents the output of the discriminator network;
the loss function $L_D$ of the discriminator network consists of two least-squares loss terms, as shown in equation (4); the first term computes the error between the label the discriminator assigns to the generated image and the fake-image label; the second term computes the error between the label assigned to the target image and the real-image label:

$$L_D = \frac{1}{n}\sum_{i=1}^{n}\left[ \left( D(G(s_i)) \right)^2 + \left( D(\sigma_i) - 1 \right)^2 \right] \qquad (4)$$
the GAN parameters are updated with the Adam algorithm within a back-propagation strategy; the generator network and the discriminator network are trained by alternating updates, i.e. the network parameters of one side are held fixed while the parameters of the other side are updated; when the GAN training loss is sufficiently small and stable, training ends and a network meeting the task requirements is obtained.
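The generator and discriminator losses of equations (1)–(4) are straightforward to express in NumPy; the sketch below assumes batched images of shape (n, H, W) and per-sample discriminator outputs of shape (n,), and uses the equal 0.5/0.5 weighting stated in equation (3):

```python
import numpy as np

def generator_loss(gen_imgs, target_imgs, d_gen):
    """L_G = 0.5 * l_MSE + 0.5 * l_LS (equations (1)-(3)).

    l_MSE: per-batch average squared reconstruction error between generated
    images G(s_i) and target images sigma_i.
    l_LS:  least-squares adversarial term pushing D(G(s_i)) toward the
    real-image label 1."""
    n = gen_imgs.shape[0]
    l_mse = np.sum((gen_imgs - target_imgs) ** 2) / n
    l_ls = np.sum((d_gen - 1.0) ** 2) / n
    return 0.5 * l_mse + 0.5 * l_ls

def discriminator_loss(d_gen, d_real):
    """L_D (equation (4)): D(G(s_i)) pushed toward the fake label 0 and
    D(sigma_i) toward the real label 1."""
    n = d_gen.shape[0]
    return np.sum(d_gen ** 2 + (d_real - 1.0) ** 2) / n

# tiny worked example: one 2x2 sample
gen = np.ones((1, 2, 2)); target = np.zeros((1, 2, 2))
lg = generator_loss(gen, target, np.array([0.5]))           # 0.5*4 + 0.5*0.25
ld = discriminator_loss(np.array([0.5]), np.array([0.8]))   # 0.25 + 0.04
```

In the alternating scheme of step S3, `generator_loss` would drive the generator update with the discriminator frozen, and `discriminator_loss` the converse.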
4. The inverse synthetic aperture radar imaging method based on a generative adversarial network of claim 1, wherein in step S4 imaging ISAR undersampled data with the generator network of the GAN means: randomly down-sampling the ISAR data in the range and azimuth directions at 25% of the original sampling rate, and imaging the two-dimensional randomly down-sampled ISAR echo data with the range-Doppler (RD) method to obtain a low-quality image, called the primary image; the primary image is taken as the input of the trained GAN's generator network, and the output of the generator network is the final imaging result.
CN201911280745.7A 2019-12-13 2019-12-13 Inverse synthetic aperture radar imaging method based on generation countermeasure network Active CN111077523B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911280745.7A CN111077523B (en) 2019-12-13 2019-12-13 Inverse synthetic aperture radar imaging method based on generation countermeasure network

Publications (2)

Publication Number Publication Date
CN111077523A CN111077523A (en) 2020-04-28
CN111077523B true CN111077523B (en) 2021-12-21

Family

ID=70314301

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911280745.7A Active CN111077523B (en) 2019-12-13 2019-12-13 Inverse synthetic aperture radar imaging method based on generation countermeasure network

Country Status (1)

Country Link
CN (1) CN111077523B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111667431B (en) * 2020-06-09 2023-04-14 云南电网有限责任公司电力科学研究院 Method and device for manufacturing cloud and fog removing training set based on image conversion
CN112001122B (en) * 2020-08-26 2023-09-26 合肥工业大学 Non-contact physiological signal measurement method based on end-to-end generation countermeasure network
CN111999731B (en) * 2020-08-26 2022-03-22 合肥工业大学 Electromagnetic backscattering imaging method based on perception generation countermeasure network
CN112560596B (en) * 2020-12-01 2023-09-19 中国航天科工集团第二研究院 Radar interference category identification method and system
CN112419203B (en) * 2020-12-07 2023-07-25 贵州大学 Diffusion weighted image compressed sensing recovery method and device based on countermeasure network
CN112731327B (en) * 2020-12-25 2023-05-23 南昌航空大学 HRRP radar target identification method based on CN-LSGAN, STFT and CNN
CN113052925A (en) * 2021-04-02 2021-06-29 广东工业大学 Compressed sensing reconstruction method and system based on deep learning
CN113205521A (en) * 2021-04-23 2021-08-03 复旦大学 Image segmentation method of medical image data
CN113378472B (en) * 2021-06-23 2022-09-13 合肥工业大学 Mixed boundary electromagnetic backscattering imaging method based on generation countermeasure network
CN113902947B (en) * 2021-10-09 2023-08-25 南京航空航天大学 Method for constructing air target infrared image generation type countermeasure network by natural image
CN114442092B (en) * 2021-12-31 2024-04-12 北京理工大学 SAR deep learning three-dimensional imaging method for distributed unmanned aerial vehicle
CN114720984B (en) * 2022-03-08 2023-04-25 电子科技大学 SAR imaging method oriented to sparse sampling and inaccurate observation
CN114609631B (en) * 2022-03-08 2023-12-22 电子科技大学 Synthetic aperture radar undersampling imaging method based on generation countermeasure network
CN115760603A (en) * 2022-11-08 2023-03-07 贵州大学 Interference array broadband imaging method based on big data technology

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108229348A (en) * 2017-12-21 2018-06-29 中国科学院自动化研究所 Block the identification device of facial image
CN108446667A (en) * 2018-04-04 2018-08-24 北京航空航天大学 Based on the facial expression recognizing method and device for generating confrontation network data enhancing
CN108460391A (en) * 2018-03-09 2018-08-28 西安电子科技大学 Based on the unsupervised feature extracting method of high spectrum image for generating confrontation network
CN108564611A (en) * 2018-03-09 2018-09-21 天津大学 A kind of monocular image depth estimation method generating confrontation network based on condition
CN108872988A (en) * 2018-07-12 2018-11-23 南京航空航天大学 A kind of inverse synthetic aperture radar imaging method based on convolutional neural networks
CN109377535A (en) * 2018-10-24 2019-02-22 电子科技大学 Facial attribute automatic edition system, method, storage medium and terminal
CN110414372A (en) * 2019-07-08 2019-11-05 北京亮亮视野科技有限公司 Method for detecting human face, device and the electronic equipment of enhancing
CN110568442A (en) * 2019-10-15 2019-12-13 中国人民解放军国防科技大学 Radar echo extrapolation method based on confrontation extrapolation neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
High Resolution SAR Image Synthesis with Hierarchical Generative Adversarial Networks;Henghua Huang et al.;《IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium》;20190802;第2782-2785页 *
ISAR Target Recognition Using Pix2pix Network Derived from cGAN;Gaopeng Li et al.;《2019 International Radar Conference (RADAR)》;20190927;第1-4页 *
Inverse synthetic aperture radar imaging method fusing deep learning and a convex-optimization iterative solution strategy;Li Ze et al.;《Journal of Image and Graphics》;20191130;vol. 24, no. 11, pp. 2045-2056 *

Also Published As

Publication number Publication date
CN111077523A (en) 2020-04-28

Similar Documents

Publication Publication Date Title
CN111077523B (en) Inverse synthetic aperture radar imaging method based on generation countermeasure network
CN108872988B (en) Inverse synthetic aperture radar imaging method based on convolutional neural network
CN107239751B (en) High-resolution SAR image classification method based on non-subsampled contourlet full convolution network
CN109683161A (en) A method of the inverse synthetic aperture radar imaging based on depth ADMM network
Feng et al. Electromagnetic scattering feature (ESF) module embedded network based on ASC model for robust and interpretable SAR ATR
Qin et al. Enhancing ISAR resolution by a generative adversarial network
Dong et al. A multiscale self-attention deep clustering for change detection in SAR images
CN112990334A (en) Small sample SAR image target identification method based on improved prototype network
Lazarov et al. ISAR geometry, signal model, and image processing algorithms
Mukherjee et al. An unsupervised generative neural approach for InSAR phase filtering and coherence estimation
CN114692509B (en) Strong noise single photon three-dimensional reconstruction method based on multi-stage degeneration neural network
Bai et al. Feature enhancement pyramid and shallow feature reconstruction network for SAR ship detection
CN111563528B (en) SAR image classification method based on multi-scale feature learning network and bilateral filtering
CN113111975A (en) SAR image target classification method based on multi-kernel scale convolutional neural network
Yu et al. PDNet: A lightweight deep convolutional neural network for InSAR phase denoising
Deng et al. Amplitude-phase CNN-based SAR target classification via complex-valued sparse image
Zhang et al. Complex-valued graph neural network on space target classification for defocused ISAR images
CN113111706A (en) SAR target feature unwrapping and identifying method for continuous missing of azimuth angle
CN105204010A (en) Ground object target detection method of low signal-to-clutter ratio synthetic aperture radar image
CN115909086A (en) SAR target detection and identification method based on multistage enhanced network
Qu et al. Enhanced through-the-wall radar imaging based on deep layer aggregation
Kang et al. Two Dimensional Spectral Representation
CN111624606A (en) Radar image rainfall identification method
CN111967292A (en) Lightweight SAR image ship detection method
CN116524358B (en) SAR data set amplification method for target recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant