CN113985408A - Inverse synthetic aperture radar imaging method combining gate unit and transfer learning - Google Patents

Inverse synthetic aperture radar imaging method combining gate unit and transfer learning

Info

Publication number: CN113985408A
Application number: CN202111067939.6A
Authority: CN (China)
Prior art keywords: fcnn, isar, layer, data, training
Other languages: Chinese (zh)
Other versions: CN113985408B
Inventors: 汪玲 (Wang Ling), 胡长雨 (Hu Changyu)
Original and current assignee: Nanjing University of Aeronautics and Astronautics
Application filed by Nanjing University of Aeronautics and Astronautics; priority to CN202111067939.6A
Filing and priority date: 2021-09-13; publication of CN113985408A: 2022-01-28; grant and publication of CN113985408B: 2024-04-05
Legal status: Granted; Active
(The legal status and assignee listings are assumptions, not legal conclusions; Google has not performed a legal analysis and makes no representation as to their accuracy.)

Classifications

    • G01S 13/90: Radar or analogous systems specially adapted for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR]
    • G01S 13/9064: SAR modes; Inverse SAR [ISAR]
    • G01S 13/9094: Synthetic aperture techniques; theoretical aspects
    • G06N 3/045: Computing arrangements based on neural networks; combinations of networks


Abstract

The invention provides an inverse synthetic aperture radar (ISAR) imaging method combining a gate unit and transfer learning. First, a fully convolutional neural network (FCNN) for ISAR imaging is constructed. Second, a gate unit is introduced into the FCNN to form the G-FCNN, and a transfer learning (TL) strategy is adopted to ensure that the performance of the G-FCNN reaches its optimum. Large-scale radar training data for pre-training the G-FCNN are then constructed with electromagnetic simulation software, after which the network-layer parameters of the pre-trained G-FCNN are fine-tuned with small-scale measured radar data to obtain optimal network parameters for the target imaging task. The invention is superior to existing ISAR imaging methods based on convolutional neural networks.

Description

Inverse synthetic aperture radar imaging method combining gate unit and transfer learning
Technical Field
The invention belongs to the technical field of radar signal processing, and relates to a method for sparse ISAR imaging.
Background
ISAR (inverse synthetic aperture radar) can obtain high-resolution images of moving targets under all-weather, day-and-night conditions and is an important tool for monitoring and identifying non-cooperative targets. The conventional RD (range-Doppler) method uses the FFT to image the target in the azimuth direction. The RD method is efficient, but its imaging results are easily contaminated by sidelobes. Sparse ISAR imaging methods can reconstruct target images with little sidelobe interference and high image contrast from few measurements. However, the imaging quality and efficiency of sparse ISAR imaging methods are limited by inaccurate sparse representations and the time-consuming iterative reconstruction process, respectively.
Well-trained CNNs (convolutional neural networks) can automatically find optimal feature representations of unknown radar data. However, these CNNs have only a single forward feature-delivery path and lack a path that delivers OFRs (original feature representations) directly to the reconstruction layers of the network. Although OFRs are effective for reconstructing target details, they are not fully exploited in these CNNs, which limits the quality of detail reconstruction. The FCNN (fully convolutional neural network) introduces SKs (skip connections) to establish additional paths that deliver OFRs directly to the reconstruction layers. However, the OFRs delivered by the SKs inevitably contain feature information of false scattering points, which makes the final reconstructed target image prone to false scattering points.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to remedy the defects of the background art, a GU (gate unit) is introduced into the FCNN to form an improved FCNN called the G-FCNN; a TL (transfer learning) strategy is applied to train the G-FCNN; and, based on the trained G-FCNN, a G-FCNN-based sparse ISAR imaging method is provided that takes a low-quality initial ISAR target image as input and outputs a better-focused imaging result.
The invention adopts the following technical scheme for solving the technical problems:
the invention provides an inverse synthetic aperture radar imaging method combining a gate unit and transfer learning, which comprises the following steps of:
s1, constructing a Full Convolution Neural Network (FCNN): constructing an FCNN of inverse synthetic aperture radar ISAR down-sampling data imaging by using a convolution layer, a maximized pooling layer, a BN layer, an activation function layer, an inverse convolution layer and a jump connection SK link;
s2, constructing G-FCNN: introducing a gate unit GU on the basis of the FCNN to construct G-FCNN; the G-FCNN inputs an initial image obtained by ISAR downsampling data through two-dimensional Fast Fourier Transform (FFT), and outputs a final ISAR imaging result;
s3, constructing an electromagnetic simulation environment of a radar irradiation target by using electromagnetic simulation software, setting corresponding radar parameters, calculating to obtain a simulated radar echo, and constructing a large-scale ISAR simulation training data set;
s4, pre-training the G-FCNN by combining a back propagation and gradient descent algorithm after a loss function form is given based on the large-scale ISAR simulation training data set generated in the step S3;
s5, fine tuning parameters in the G-FCNN pre-trained in the step S4 by utilizing a small-scale ISAR actual measurement training data set to obtain the G-FCNN with optimal parameters;
s6, imaging of unknown ISAR down-sampling data is achieved through the G-FCNN, an initial image obtained through two-dimensional FFT of the ISAR down-sampling data is used as the input of the fine-tuned G-FCNN, and the output of the G-FCNN is the final imaging result.
Further, in the inverse synthetic aperture radar imaging method combining the gate unit and transfer learning, the FCNN is constructed in step S1 as follows:

first, multiple feature maps are extracted with 3×3 convolution kernels, the deviation between the feature data and the input data is reduced with a BN layer and an activation function layer, and the dimensionality of the feature data is reduced with a max-pooling layer whose operation kernel is 2×2;

second, in the feature-data reconstruction process, the feature data are reconstructed with deconvolution layers whose kernels are 2×2, while a BN layer and a ReLU activation function reduce the error between the reconstructed feature data and the dimension-reduced feature data;

the FCNN has a three-stage structure; each stage uses an SK to establish a mapping between shallow and deep layers of the network, and an SK added at the last layer of the network sums the initial image with the feature data reconstructed by the network.
Further, in the inverse synthetic aperture radar imaging method combining the gate unit and transfer learning, the G-FCNN is constructed in step S2 as follows:

on the basis of the FCNN, the gate-unit mechanism is realized with a 1×1 convolutional layer:
$$\hat{B}_s = g_s^{1\times 1}(B_s) = W_s^{1\times 1} \ast B_s + b_s^{1\times 1}, \qquad s = 1, 2, 3$$

where $B_s$ denotes the cascaded shallow network features, i.e. the OFRs, in the FP$_s$ layer, with $s = 1, 2, 3$ denoting the stage; $g_s^{1\times 1}(\cdot)$ denotes the $1\times 1$ gate-function convolution kernel; $W_s^{1\times 1}$ denotes the weight parameter of the convolution operation and $b_s^{1\times 1}$ the bias; the weight parameter $W_s^{1\times 1}$ controls how many of the OFRs can be passed further forward in stage $s$; and $\hat{B}_s$ denotes the OFRs that pass through the gate unit in stage $s$.
Further, in the inverse synthetic aperture radar imaging method combining the gate unit and transfer learning, the large-scale ISAR simulation training data set is constructed in step S3 as follows:

a three-dimensional model resembling the measured Yak-42 target is designed with CADFEKO in the electromagnetic simulation software FEKO; the scale of the simulation model is consistent with that of the measured target, and the material of the simulated target is set to a perfect electric conductor;

in FEKO, the simulation model is placed in a global coordinate system OXYZ, the radar parameters are set, the surface of the simulation model is meshed, the method for computing the target's scattered echoes is set, and the simulated radar echoes are computed.
Furthermore, in the inverse synthetic aperture radar imaging method combining the gate unit and transfer learning, the received echoes are processed in two ways:

in the first way, a two-dimensional FFT is applied directly to the reflected echoes to obtain an image of the simulated target, called the label image;

in the second way, the reflected echoes are randomly down-sampled in the range and azimuth directions, and a two-dimensional FFT is applied to the down-sampled data to obtain an image of the simulated target, called the initial image;

a training sample in the ISAR simulation training data set consists of one initial image and one label image, and all simulation training samples are divided into a training set and a validation set for pre-training the G-FCNN.
Furthermore, in the inverse synthetic aperture radar imaging method combining the gate unit and transfer learning, the minimum mean-square-error function is used as the loss function in step S4; after the loss function is set, the G-FCNN parameters are updated with batch gradient descent and back propagation to pre-train the G-FCNN.
Further, in the inverse synthetic aperture radar imaging method combining the gate unit and transfer learning, the minimum mean-square-error function used as the loss function in step S4 is:
$$L(\{W\}) = \frac{1}{N}\sum_{i=1}^{N}\left\| f(\tilde{\sigma}_i; \{W\}) - \sigma_i \right\|_2^2$$

where $i$ denotes the $i$-th training sample and $N$ the number of training samples; $f(\tilde{\sigma}_i; \{W\})$ is the reconstruction output by the network; $\tilde{\sigma}_i$ denotes the initial image in the $i$-th training sample; $\{W\}$ denotes the set of network parameters and $f(\cdot)$ the function described by the network; $\sigma_i$ denotes the label image of the $i$-th sample; and $L(\{W\})$ denotes the reconstruction mean square error.
Further, in the inverse synthetic aperture radar imaging method combining the gate unit and transfer learning, in step S5 the small-scale measured ISAR training data set is a set of measured ISAR target images, and each training sample in the measured training data set likewise consists of an initial image of measured ISAR data and a label image;

the pre-trained G-FCNN is fine-tuned with the small-scale measured ISAR training data set and the minimum mean-square-error function of step S4 to obtain the optimal G-FCNN for the unknown-ISAR-target imaging task.
Further, in the inverse synthetic aperture radar imaging method combining the gate unit and transfer learning, in step S6 the G-FCNN is used to image measured ISAR target data: the motion-compensated ISAR data are randomly down-sampled in the range and azimuth directions, an initial image is obtained from the down-sampled data by a two-dimensional FFT, the initial image is used as the input of the fine-tuned G-FCNN, and the output of the G-FCNN is the final imaging result.
Compared with the prior art, the invention adopting the above technical scheme has the following technical effects:

According to the invention, a gate unit GU is introduced into the FCNN to form the G-FCNN. GUs have the ability to learn: they can strengthen effective feature representations, weaken ineffective ones, and thus autonomously decide which OFRs may be passed further forward.

Meanwhile, the TL strategy is applied to the G-FCNN: the G-FCNN is first pre-trained with a large-scale ISAR simulation training data set and then fine-tuned with an existing small-scale measured ISAR training data set. The simulated ISAR training data set contains sufficient general features of the simulated target (e.g., its structural features), so that the G-FCNN can subsequently learn the unique features of the measured radar target (e.g., its detailed features) from the measured ISAR training data set. The trained G-FCNN takes a low-quality initial ISAR target image as input and outputs a correspondingly better-focused imaging result.

The measured ISAR data imaging results show that the proposed G-FCNN-based imaging method is superior to the current best CNN-based ISAR imaging method in both image quality and quantitative evaluation.
Drawings
FIG. 1 is a diagram of the G-FCNN network architecture for sparse ISAR imaging.

FIG. 2 shows the Yak-42 aircraft and the Yak-42 airplane model simulated in FEKO.

FIG. 3 shows some of the label images of the Yak-42 model from the large-scale radar simulation data set.

FIG. 4 shows the results of imaging aircraft data 1 with the RD method and with the G-FCNN, CNN, and FCNN methods.

FIG. 5 shows the results of imaging aircraft data 2 with the RD method and with the G-FCNN, CNN, and FCNN methods.
Detailed Description
The technical solution of the present invention is further described in detail below with reference to the accompanying drawings.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
According to the invention, a GU (gate unit) is introduced into the FCNN to form an improved FCNN called the G-FCNN. The gate unit has the ability to learn: it can strengthen effective feature representations and weaken ineffective ones, thereby autonomously deciding which OFRs may be passed further forward.

The G-FCNN not only retains the multi-stage decomposition and multi-channel filtering of the FCNN but also introduces gate units, so its architecture is more complex than that of the FCNN. The existing small-scale measured ISAR training data set has limited training samples and cannot guarantee that the parameters of the G-FCNN are optimal. Therefore, the invention applies a TL (transfer learning) strategy to the G-FCNN: the G-FCNN is first pre-trained with a large-scale ISAR simulation training data set and then fine-tuned with the existing small-scale measured ISAR training data set. The simulated ISAR training data set contains sufficient general features of the simulated target (e.g., its structural features), so that the G-FCNN can subsequently learn the unique features of the measured radar target (e.g., its detailed features) from the measured ISAR training data set. The trained G-FCNN takes a low-quality initial ISAR target image as input and outputs a correspondingly better-focused imaging result.

The invention provides an inverse synthetic aperture radar imaging method combining a gate unit and transfer learning: a gate unit is introduced on the basis of the FCNN architecture to construct the G-FCNN, and TL is used to obtain the optimal G-FCNN for the target imaging task. The method specifically comprises the following steps:
step 1, constructing FCNN.
The FCNN is constructed by using a convolutional layer, a maximization pooling layer, a BN layer, an activation function layer, a deconvolution layer, SKs and the like. The specific operation is as follows: first, a plurality of feature data are extracted using a convolution kernel of 3 × 3, and the deviation of the feature data from the input data is reduced using the BN layer and the ReLU function. And reducing the dimension of the feature data by utilizing a maximum pooling layer with an operation core of 2 multiplied by 2. In the characteristic data reconstruction process, the characteristic data is reconstructed by using a deconvolution layer, and the size of a deconvolution kernel is 2 multiplied by 2. And simultaneously, the BN layer and the ReLU activation function are utilized to reduce the error of the reconstructed feature data and the dimension-reduced feature data. The FCNN has a three-level structure, and each level adopts SK to establish mapping of a network shallow layer and a network deep layer. And adding SK in the last layer of the network, and summing the initial image with the characteristic data reconstructed by the network.
Step 2: construct the G-FCNN.

A gate unit is introduced on the basis of the FCNN to construct the G-FCNN. The G-FCNN takes as input the initial image obtained from the down-sampled ISAR echo data and outputs the final ISAR imaging result. The gate units learn adaptively: a learned gate unit weights the OFRs passed forward through the SKs, strengthening useful feature representations and weakening invalid ones. The gate-unit mechanism is realized with a 1×1 convolutional layer:
$$\hat{B}_s = g_s^{1\times 1}(B_s) = W_s^{1\times 1} \ast B_s + b_s^{1\times 1}, \qquad s = 1, 2, 3 \qquad (1)$$

where $B_s$ denotes the OFRs cascaded in the FP$_s$ layer, with $s = 1, 2, 3$ denoting the stage; $g_s^{1\times 1}(\cdot)$ denotes the $1\times 1$ gate-function convolution kernel; $W_s^{1\times 1}$ denotes the weight parameter of the convolution operation and $b_s^{1\times 1}$ the bias. The weight parameter $W_s^{1\times 1}$ in the gate unit controls how many of the OFRs can be passed further forward in stage $s$, and $\hat{B}_s$ denotes the OFRs that pass through the gate unit in stage $s$.
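For illustration, a minimal PyTorch sketch of such a gate unit follows. The module name is ours, and keeping the operation purely linear (a single 1×1 convolution with no extra non-linearity, matching equation (1)) is our reading of the patent rather than a confirmed implementation detail.

```python
import torch
import torch.nn as nn

class GateUnit(nn.Module):
    """Gate unit sketch: a learned 1x1 convolution that re-weights the
    concatenated shallow features (OFRs) delivered by skip connections."""
    def __init__(self, channels):
        super().__init__()
        # W (the 1x1 kernel) controls how much of each OFR channel is
        # passed further forward; b is the bias of the convolution.
        self.gate = nn.Conv2d(channels, channels, kernel_size=1, stride=1)

    def forward(self, B_s):
        # B_hat_s = W * B_s + b, as in gate equation (1) above.
        return self.gate(B_s)

# Usage sketch: gate the 512 concatenated feature maps of the FP3 stage.
gu = GateUnit(512)
feats = torch.randn(1, 512, 25, 25)   # hypothetical concatenated OFRs
gated = gu(feats)                     # same shape, re-weighted OFRs
```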
The G-FCNN constructed by the invention is shown in FIG. 1; its concrete structure is as follows:

The first layer is convolutional layer C1: 64 3×3 filters convolve with stride 1; after the convolution, the output is normalized by a BN layer and then nonlinearly activated by a ReLU function.

The second layer is convolutional layer C2: 64 3×3 filters convolve with stride 1, followed by a BN layer and a ReLU activation.

The third layer is pooling layer P1: max pooling with a 2×2 operation kernel and stride 2.

The fourth layer is convolutional layer C3: 128 3×3 filters convolve with stride 1, followed by a BN layer and a ReLU activation.

The fifth layer is pooling layer P2: max pooling with a 2×2 operation kernel and stride 2.

The sixth layer is convolutional layer C4: 256 3×3 filters convolve with stride 1, followed by a BN layer and a ReLU activation.

The seventh layer is convolutional layer FP3: 256 3×3 filters convolve with stride 1, followed by a BN layer and a ReLU activation; in addition, an SK transfers the output of the P2 layer to the FP3 layer, and after concatenation with the 256 feature representations there, FP3 holds 512 feature representations in total.

The eighth layer is gate-unit layer C5, with 512 1×1 convolution kernels in total.

The ninth layer is deconvolution layer D1: 128 2×2 filters perform the deconvolution with stride 1, followed by a BN layer and a ReLU activation.

The tenth layer is the FP2 layer: 128 3×3 filters convolve with stride 1, followed by a BN layer and a ReLU activation; in addition, SKs transfer the outputs of the P1 and C3 layers to the FP2 layer, and after concatenation with the 128 feature representations there, FP2 holds 384 feature representations in total.

The eleventh layer is gate-unit layer C6, with 384 1×1 convolution kernels in total.

The twelfth layer is deconvolution layer D2: 64 2×2 filters perform the deconvolution with stride 1, followed by a BN layer and a ReLU activation.

The thirteenth layer is the FP1 layer: 64 3×3 filters convolve with stride 1, followed by a BN layer and a ReLU activation; in addition, SKs transfer the outputs of the C1 and C2 layers to the FP1 layer, and after concatenation with the 64 feature representations there, FP1 holds 192 feature representations in total.

The fourteenth layer is gate-unit layer C7, with 192 1×1 convolution kernels in total.

The fifteenth layer is convolutional layer C8: one 3×3 filter convolves with stride 1, followed by a BN layer and a ReLU activation.

The sixteenth layer is a summation layer: the initial image and the output of C8 are summed pixel by pixel to obtain the final image-reconstruction result.

The seventeenth layer is the loss-function layer: it computes the error between the output of the sixteenth layer and the label image of the training sample, after which the network parameters are updated by back propagation and a gradient descent algorithm.
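To make the data flow through these seventeen layers concrete, the following compact PyTorch sketch reconstructs the forward pass under stated assumptions: the 3×3 convolutions use padding 1 so they preserve size, the 2×2 deconvolutions use stride 2 so that they undo the 2×2 pooling (the text lists step length 1), and the SK sources feeding FP3 and FP2 are chosen so that the stated channel totals (512, 384, 192) hold. All names are illustrative, and this is our reconstruction rather than the patent's reference code.

```python
import torch
import torch.nn as nn

def conv_bn_relu(in_ch, out_ch):
    """3x3 convolution (stride 1, padding 1) + BN + ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=1, padding=1),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

def deconv_bn_relu(in_ch, out_ch):
    """2x2 deconvolution + BN + ReLU; stride 2 assumed so that it
    reverses the 2x2 max pooling."""
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, 2, stride=2),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class GFCNN(nn.Module):
    """Sketch of the 17-layer G-FCNN: contraction C1..FP3, expansion C5..C8."""
    def __init__(self):
        super().__init__()
        self.c1 = conv_bn_relu(1, 64)       # layer 1
        self.c2 = conv_bn_relu(64, 64)      # layer 2
        self.p1 = nn.MaxPool2d(2, 2)        # layer 3
        self.c3 = conv_bn_relu(64, 128)     # layer 4
        self.p2 = nn.MaxPool2d(2, 2)        # layer 5
        self.c4 = conv_bn_relu(128, 256)    # layer 6
        self.fp3 = conv_bn_relu(256, 256)   # layer 7
        self.g5 = nn.Conv2d(512, 512, 1)    # layer 8: gate unit C5
        self.d1 = deconv_bn_relu(512, 128)  # layer 9
        self.fp2 = conv_bn_relu(128, 128)   # layer 10
        self.g6 = nn.Conv2d(384, 384, 1)    # layer 11: gate unit C6
        self.d2 = deconv_bn_relu(384, 64)   # layer 12
        self.fp1 = conv_bn_relu(64, 64)     # layer 13
        self.g7 = nn.Conv2d(192, 192, 1)    # layer 14: gate unit C7
        self.c8 = conv_bn_relu(192, 1)      # layer 15

    def forward(self, x0):
        x1 = self.c1(x0)                    # 64 ch, full resolution
        x2 = self.c2(x1)
        x3 = self.c3(self.p1(x2))           # 128 ch, 1/2 resolution
        x4 = self.c4(self.p2(x3))           # 256 ch, 1/4 resolution
        # SK concatenation to 512 features (SK source assumed: C4 output)
        f3 = torch.cat([self.fp3(x4), x4], dim=1)
        d1 = self.d1(self.g5(f3))           # gate, then deconvolve to 1/2 res
        # SK concatenation to 384 features (sources assumed: C3 and D1 outputs)
        f2 = torch.cat([self.fp2(d1), x3, d1], dim=1)
        d2 = self.d2(self.g6(f2))           # gate, then deconvolve to full res
        # SKs from C1 and C2: 64 + 64 + 64 = 192 features, as in the text
        f1 = torch.cat([self.fp1(d2), x1, x2], dim=1)
        out = self.c8(self.g7(f1))          # layer 15: single-channel output
        return x0 + out                     # layer 16: pixel-wise summation
```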
Step 3: construct the large-scale simulated ISAR training data set.

A three-dimensional model of the moving target is designed in the electromagnetic simulation software FEKO, an electromagnetic simulation environment of the radar-illuminated target is constructed, the corresponding radar parameters are set, and the simulated radar echoes are computed. First, the three-dimensional model of the moving target is constructed. The specific steps are as follows:
constructing a target simulation model: and constructing a three-dimensional model of the Yak-42 by utilizing the geometric module in the FEKO. The material of the model is designed into an ideal electric conductor, and the size of the model is consistent with that of the real Yak-42. The right side of fig. 2 shows an image of the simulated Yak-42 model. It is noted that the actual measurement radar data available for the experiment of the invention is a Yak-42 airplane, so the invention constructs the simulated Yak-42 radar data to ensure that the imaging target in the large-scale simulation training data set is consistent with the imaging target in the small-scale actual measurement training data set.
Dividing the surface mesh of the model: and dividing the surface of the model into a plurality of triangular meshes, wherein the side length of each triangle of each mesh is 0.042 m.
Setting radar parameters: it is assumed that the simulation model is placed in a global coordinate system (OXYZ), as shown in FIG. 3. The phantom is illuminated by a plane wave. Under far-field assumption, the reflected echoes of the simulation model are calculated by the majority P0 method and are collected by receivers located around the target model. The frequency band of the transmitted plane wave is 8.75 GHz-9.25 GHz, and the frequency sampling number is 100. The elevation angle theta starts at 45 deg. and ends at 90 deg., the increment is 1 deg., and the number of pitch samples is 45 deg.. For each sample of pitch angle θ, the azimuth angle is 360 °. The azimuth sampling interval is 0.05 °. The reflected echo azimuth is processed every 5 °.
Each processed segment of the target's scattered echoes is handled in two ways. The first applies a two-dimensional FFT to the echoes to obtain an image of the simulated target, called the label image; FIG. 3 shows some of the label images of the simulated target. The second randomly down-samples the echoes two-dimensionally in both the range and azimuth directions and then applies a two-dimensional FFT to obtain an image of the simulated target, called the initial image. The quality of the initial image is poor because the down-sampled echoes cannot be fully coherently accumulated.
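For illustration, the following NumPy sketch reproduces the two processing ways on a placeholder echo matrix; the echo values, the 100×100 size, and the retention rate are stand-ins for the FEKO output rather than the patent's actual data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder 100x100 complex echo matrix (range x azimuth bins); in the
# patent this would be the FEKO-simulated scattered echo, not random noise.
echo = rng.standard_normal((100, 100)) + 1j * rng.standard_normal((100, 100))

# Way 1: full two-dimensional FFT of the echoes -> well-focused label image.
label_image = np.abs(np.fft.fftshift(np.fft.fft2(echo)))

# Way 2: random down-sampling in range and azimuth (here 50% of the range
# bins and 50% of the azimuth bins, i.e. about 25% of the samples, an
# illustrative rate), then the same two-dimensional FFT -> initial image.
keep_range = rng.random(100) < 0.5
keep_azimuth = rng.random(100) < 0.5
mask = np.outer(keep_range, keep_azimuth)
initial_image = np.abs(np.fft.fftshift(np.fft.fft2(echo * mask)))

# Normalized, the two images form one training sample (input, label).
initial_image /= initial_image.max()
label_image /= label_image.max()
```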
A label image and an initial image together constitute one training sample; thus, for each elevation sample θ, 72 training samples are available.

From the above steps, 3240 (45 × 72) training samples are generated in total. The samples with elevation angles of 50°, 60°, 70°, 80° and 90° are assigned to the validation set, and the remaining samples to the training set. This split distributes the elevation angles of the training and validation samples uniformly between 45° and 90°, which reduces the difference between the sample distributions of the two sets. The details of the simulation data set are shown in Table 1-1.
Table 1-1. Radar training data set 1

Data set | Data size | Training set | Validation set
Simulation data set | 100×100 | 2880 (88%) | 360 (12%)
Step 4: based on the large-scale simulated ISAR training data set generated in step 3, pre-train the G-FCNN with back propagation and a gradient descent algorithm after specifying the form of the loss function.

The loss function required for pre-training the G-FCNN is specified, and the network parameters are updated by the training algorithm. The invention takes the minimum mean-square-error function as the loss function:
$$L(\{W\}) = \frac{1}{N}\sum_{i=1}^{N}\left\| f(\tilde{\sigma}_i; \{W\}) - \sigma_i \right\|_2^2 \qquad (2)$$

where $i$ denotes the $i$-th training sample and $N$ the number of training samples; $f(\tilde{\sigma}_i; \{W\})$ is the reconstruction output by the network; $\tilde{\sigma}_i$ denotes the initial image in the $i$-th training sample; $\{W\}$ denotes the set of network parameters and $f(\cdot)$ the function described by the network; $\sigma_i$ denotes the label image of the $i$-th sample; and $L(\{W\})$ denotes the reconstruction mean square error.
The network parameters are updated with batch gradient descent and back propagation to pre-train the G-FCNN. The hyper-parameter settings of the pre-training process are shown in Table 1-2.
Table 1-2. Hyper-parameter settings 1 for G-FCNN training

Training process | Learning rate | Learning strategy | Training interval | Number of iterations
Pre-training | 10e-5 | Fixed learning rate | 1440 | 180
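A minimal PyTorch pre-training loop consistent with this setup might look as follows. It reuses the GFCNN sketch given earlier; the batch size, the plain SGD optimizer, and the random stand-in tensors are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-ins for the 2880 simulated training pairs
# (initial images as inputs, label images as targets).
inputs = torch.rand(2880, 1, 100, 100)
targets = torch.rand(2880, 1, 100, 100)
loader = DataLoader(TensorDataset(inputs, targets), batch_size=16, shuffle=True)

model = GFCNN()                        # architecture sketch above
criterion = nn.MSELoss()               # minimum mean-square-error loss, Eq. (2)
optimizer = torch.optim.SGD(model.parameters(), lr=10e-5)  # "10e-5", Table 1-2

for epoch in range(180):               # "number of iterations" in Table 1-2
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)  # reconstruction error vs. label image
        loss.backward()                # back propagation
        optimizer.step()               # gradient-descent parameter update
```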
Step 5: fine-tune the network layers of the G-FCNN pre-trained in step 4 with the small-scale measured ISAR training data set to obtain the G-FCNN with optimal performance for the target imaging task.

The small-scale measured ISAR training data set is a set of measured ISAR target images; each training sample comprises an initial image of measured ISAR target data and the corresponding label image. In the fine-tuning process, the initial image is used as the input of the pre-trained G-FCNN. 700 training samples were constructed by the above method and divided into a training set of 600 samples and a validation set of 100 samples. The details of the measured data set are shown in Table 2-1.
Table 2-1. Radar training data set 2

Data set | Data size | Training set | Validation set
Measured data set | 100×100 | 600 (86%) | 100 (14%)
The pre-trained G-FCNN is fine-tuned with the small-scale measured ISAR training data set and the minimum mean-square-error function of formula (2) to obtain the optimal G-FCNN for the target imaging task. The hyper-parameter settings of the fine-tuning process are shown in Table 2-2.
Table 2-2. Hyper-parameter settings 2 for G-FCNN training

Training process | Learning rate | Learning strategy | Training interval | Number of iterations
Fine-tuning | 10e-5 | Fixed learning rate | 300 | 50
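Fine-tuning then amounts to loading the pre-trained weights and continuing training at the settings of Table 2-2, as in the sketch below; the checkpoint path and the frozen layer are hypothetical, since the patent specifies neither.

```python
import torch

model = GFCNN()                                            # sketch above
state = torch.load("gfcnn_pretrained.pt")                  # hypothetical path
model.load_state_dict(state)                               # pre-trained weights

# Illustrative choice only: freeze the first contraction layer and fine-tune
# the rest; the patent fine-tunes "network layer parameters" without saying
# whether any layers are kept fixed.
for p in model.c1.parameters():
    p.requires_grad = False

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=10e-5)  # Table 2-2
# The training loop is the same MSE loop as in pre-training, run for
# 50 iterations on the 600 measured samples (Tables 2-1 and 2-2).
```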
Step 6: image unknown target radar data with the G-FCNN.

The ISAR echo data are randomly down-sampled two-dimensionally in the range and azimuth directions, and an initial image is obtained by a two-dimensional FFT. The initial image is used as the input of the trained G-FCNN, and the output of the G-FCNN is the final imaging result.
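End to end, imaging an unseen echo matrix reduces to the short chain below, again a sketch with placeholder data that reuses the GFCNN class from above.

```python
import numpy as np
import torch

rng = np.random.default_rng(1)
# Placeholder for motion-compensated measured echo data (100x100, complex).
echo = rng.standard_normal((100, 100)) + 1j * rng.standard_normal((100, 100))

# 2-D random down-sampling in range and azimuth, then a 2-D FFT.
mask = np.outer(rng.random(100) < 0.5, rng.random(100) < 0.5)
init = np.abs(np.fft.fftshift(np.fft.fft2(echo * mask)))
init /= init.max()

model = GFCNN()               # fine-tuned weights assumed to be loaded here
model.eval()
with torch.no_grad():
    x = torch.from_numpy(init).float().reshape(1, 1, 100, 100)
    image = model(x).squeeze().numpy()   # the final imaging result
```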
Finally, the G-FCNN imaging results are compared in detail with those of the best CNN-based ISAR imaging method in terms of imaging quality, imaging characteristics, and so on, giving qualitative and quantitative comparative analyses.
In implementation, the method is mainly divided into a pre-training stage and a fine-tuning stage.

First, the G-FCNN network architecture is built, as shown in FIG. 1. The G-FCNN comprises a contraction part (C1 to FP3) and an expansion part (C5 to C8). The G-FCNN has a three-stage structure; each stage establishes a link between shallow and deep layers of the network through an SK, which delivers the OFRs directly to the deep layers. Meanwhile, the SKs alleviate the vanishing-gradient phenomenon during back propagation, which helps update the shallow parameters of the network. In addition, the gate units introduced into the G-FCNN strengthen the useful features of the OFRs and weaken the ineffective ones.
In the pre-training stage of the G-FCNN, the simulated ISAR training data set for pre-training is constructed.

A three-dimensional model of the Yak-42 is first built, as shown in FIG. 2. The simulated Yak-42 has the same scale as the actual Yak-42, and the material of the model is set to a perfect electric conductor.

The surface of the model is then meshed, so that it consists of triangular facets. The simulated three-dimensional target is placed in the global coordinate system OXYZ, the radar parameters are set, and the scattered echoes of the simulated target are obtained.

Finally, the scattered echoes are processed to obtain the initial images and label images. FIG. 3 shows some of the label images generated from the simulated target's radar echoes.

One initial image and one label image constitute one training sample. All training samples obtained are divided into a training set and a validation set for pre-training the G-FCNN.

The pre-training hyper-parameters and loss function are designed, and the G-FCNN is pre-trained. The pre-trained G-FCNN thereby absorbs prior information about the simulated Yak-42, such as its structural features.

In the fine-tuning stage, the G-FCNN is retrained with the small-scale measured ISAR data set on the basis of the pre-trained G-FCNN. The training samples in the measured ISAR data set consist of measured ISAR images. The reconstruction results are likewise evaluated with the minimum mean-square-error loss function, the errors are back-propagated, and the network parameters of the G-FCNN are fine-tuned by a stochastic gradient descent algorithm.

The fine-tuned G-FCNN is then used to image unknown measured ISAR data. During G-FCNN imaging, the initial image obtained by a two-dimensional FFT of the two-dimensionally randomly down-sampled echo data is used as the input of the G-FCNN, and the output of the G-FCNN is the final imaging result.
Embodiments
To verify the effectiveness of the gate units introduced into the G-FCNN, the G-FCNN imaging results are compared with the CNN and FCNN imaging results. In addition, to illustrate the usefulness of the simulation data and the advantage of the TL strategy, the G-FCNN obtained through TL is referred to as G-FCNNsr, while the G-FCNNs trained only on the simulated training data set and only on the measured training data set are referred to as G-FCNNs and G-FCNNr, respectively.

As shown in Table 3, two sets of Yak-42 aircraft data different from those in the measured training data set, referred to as aircraft data 1 and aircraft data 2, are used to verify the imaging performance of the deep imaging network according to the invention. The two data sets are down-sampled to 25% and 10%, respectively.
Table 3. Measured radar data parameters used to verify G-FCNN performance

Yak-42 aircraft data | Data size | Sampling rate
Aircraft data 1 | 100×100 | 25% (2500)
Aircraft data 2 | 100×100 | 10% (1000)
FIG. 4(a) shows the result of imaging the full aircraft data 1 with the conventional RD method. FIGS. 4(b)-(f) show the results of imaging 25% of the Yak-42 aircraft data 1 with the G-FCNNs, G-FCNNsr, G-FCNNr, FCNN, and CNN imaging methods.

FIG. 4(b) shows intuitively that sparse imaging of measured Yak-42 aircraft data with G-FCNNs trained on simulation data is feasible. The G-FCNNs can provide a good image of the aircraft target because they learn the general characteristics of the simulated Yak-42's appearance, structure, and so on from the simulated training data set.

Comparing FIG. 4(b) and (c), the aircraft edges reconstructed by the G-FCNNsr are better than those reconstructed by the G-FCNNs. This is because the general features of the Yak-42 aircraft model incorporated during pre-training guide the G-FCNNsr, in the fine-tuning stage, to better learn the features specific to the measured Yak-42 aircraft from the measured training data set. These unique features carry more accurate detail information about the Yak-42 aircraft than the general features and thus yield a better reconstruction of the target edges.

Comparing FIG. 4(d), (e), and (f), the number of spurious scattering points in the G-FCNNr result is small relative to that in the FCNN and CNN results. This is because the learned gate units in the G-FCNNr autonomously strengthen valid OFRs and weaken useless ones, delivering more accurate shallow features and suppressing false scattering points in the target image.

FIG. 5 shows the target images of aircraft data 2 obtained with the RD imaging method and the G-FCNN, CNN, and FCNN imaging methods. Comparing FIG. 5(b) and (c), the target edges in the G-FCNNsr reconstruction are better than those in the G-FCNNs reconstruction. Comparing FIG. 5(d), (e), and (f), the G-FCNNr reconstruction has the fewest false scattering points, as indicated by the blue circles in the figure, which again verifies the effectiveness of the gate units in the G-FCNN.
Besides the visual comparison of the imaging results, the invention evaluates them quantitatively. The image evaluation functions include "ground-truth"-based functions and conventional functions. The "ground-truth"-based indicators are FA (false alarm) and RRMSE (relative root mean square error): FA counts the wrongly reconstructed scattering points, and RRMSE evaluates the reconstruction error of the scattering-point amplitudes. Because no true ground-truth image exists, the well-focused, high-quality RD image obtained from the full data is used as the "ground-truth" image in the experiments, so what is actually measured is the quality of each method relative to the RD imaging result. The conventional imaging-quality indicators are TCR (target-to-clutter ratio), ENT (image entropy), and IC (image contrast).
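The patent gives no formulas for these indicators; the sketch below therefore uses the image-entropy and image-contrast definitions that are standard in ISAR quality assessment, which is our assumption.

```python
import numpy as np

def image_entropy(img):
    """ENT: entropy of the normalized image power (a standard ISAR definition)."""
    power = np.abs(img) ** 2
    p = power / power.sum()
    p = p[p > 0]                       # avoid log(0)
    return float(-(p * np.log(p)).sum())

def image_contrast(img):
    """IC: std of the image power divided by its mean (a standard ISAR definition)."""
    power = np.abs(img) ** 2
    return float(np.sqrt(((power - power.mean()) ** 2).mean()) / power.mean())

# Usage on a hypothetical reconstructed image.
img = np.random.rand(100, 100)
print(image_entropy(img), image_contrast(img))
```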
The imaging results in fig. 4 and 5 were quantitatively evaluated using the above-described image evaluation function, and the evaluation results are shown in table 4.
Table 4. Quality evaluation of images reconstructed by different imaging methods

Data set | Imaging method | FA | RRMSE | TCR (dB) | ENT | IC
Yak-42 data set 1 | G-FCNNs | 72 | 0.2681 | 59.1034 | 5.4339 | 8.7784
Yak-42 data set 1 | G-FCNNr | 168 | 0.3948 | 50.2612 | 5.5057 | 8.3654
Yak-42 data set 1 | G-FCNNsr | 32 | 0.2574 | 68.8043 | 5.1078 | 9.8582
Yak-42 data set 1 | FCNN | 85 | 0.3366 | 50.2566 | 5.5600 | 8.3574
Yak-42 data set 1 | CNN | 86 | 0.3416 | 50.3361 | 5.6621 | 8.1222
Yak-42 data set 2 | G-FCNNs | 31 | 0.2691 | 66.7447 | 5.0505 | 10.8862
Yak-42 data set 2 | G-FCNNr | 60 | 0.1986 | 60.9772 | 5.3935 | 8.8795
Yak-42 data set 2 | G-FCNNsr | 23 | 0.1960 | 69.3828 | 5.2013 | 9.4995
Yak-42 data set 2 | FCNN | 88 | 0.3412 | 60.2235 | 5.2331 | 10.3365
Yak-42 data set 2 | CNN | 88 | 0.3412 | 60.2235 | 5.2331 | 10.3365
As can be seen from Table 4, the FA and RRMSE values of the G-FCNNsr-based imaging method are the smallest of all methods, indicating that its reconstructed images contain the fewest false scattering points and the smallest amplitude-reconstruction errors. The TCR, ENT, and IC columns of Table 4 show that the G-FCNNsr imaging results have the highest TCR and IC and the lowest ENT. This is consistent with the visual comparison of the images shown in FIGS. 4 and 5.
The foregoing is only a part of the embodiments of the invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the invention, and such improvements and modifications shall also be regarded as falling within the protection scope of the invention.

Claims (9)

1. An inverse synthetic aperture radar imaging method combining a gate unit and transfer learning, comprising the following steps:

S1, constructing a fully convolutional neural network (FCNN): an FCNN for imaging inverse synthetic aperture radar (ISAR) down-sampled data is built from convolutional layers, max-pooling layers, BN layers, activation function layers, deconvolution layers, and skip connections (SKs);

S2, constructing the G-FCNN: a gate unit (GU) is introduced into the FCNN to construct the G-FCNN; the G-FCNN takes as input an initial image obtained from the ISAR down-sampled data by a two-dimensional fast Fourier transform (FFT) and outputs the final ISAR imaging result;

S3, constructing an electromagnetic simulation environment of a radar-illuminated target with electromagnetic simulation software, setting the corresponding radar parameters, computing the simulated radar echoes, and constructing a large-scale ISAR simulation training data set;

S4, based on the large-scale ISAR simulation training data set generated in step S3, pre-training the G-FCNN with back propagation and a gradient descent algorithm after specifying the form of the loss function;

S5, fine-tuning the parameters of the G-FCNN pre-trained in step S4 with a small-scale measured ISAR training data set to obtain the G-FCNN with optimal parameters;

S6, imaging unknown ISAR down-sampled data with the G-FCNN: the initial image obtained from the ISAR down-sampled data by the two-dimensional FFT is used as the input of the fine-tuned G-FCNN, and the output of the G-FCNN is the final imaging result.
2. The method according to claim 1, wherein the FCNN is constructed in step S1 as follows:

first, multiple feature maps are extracted with 3×3 convolution kernels, the deviation between the feature data and the input data is reduced with a BN layer and an activation function layer, and the dimensionality of the feature data is reduced with a max-pooling layer whose operation kernel is 2×2;

second, in the feature-data reconstruction process, the feature data are reconstructed with deconvolution layers whose kernels are 2×2, while a BN layer and a ReLU activation function reduce the error between the reconstructed feature data and the dimension-reduced feature data;

the FCNN has a three-stage structure; each stage uses an SK to establish a mapping between shallow and deep layers of the network, and an SK added at the last layer of the network sums the initial image with the feature data reconstructed by the network.
3. The method according to claim 1, wherein the G-FCNN is constructed in step S2 as follows:

on the basis of the FCNN, the gate-unit mechanism is realized with a 1×1 convolutional layer:
$$\hat{B}_s = g_s^{1\times 1}(B_s) = W_s^{1\times 1} \ast B_s + b_s^{1\times 1}, \qquad s = 1, 2, 3$$

where $B_s$ denotes the cascaded shallow network features, i.e. the OFRs, in the FP$_s$ layer, with $s = 1, 2, 3$ denoting the stage; $g_s^{1\times 1}(\cdot)$ denotes the $1\times 1$ gate-function convolution kernel; $W_s^{1\times 1}$ denotes the weight parameter of the convolution operation and $b_s^{1\times 1}$ the bias; the weight parameter $W_s^{1\times 1}$ controls how many of the OFRs can be passed further forward in stage $s$; and $\hat{B}_s$ denotes the OFRs that pass through the gate unit in stage $s$.
4. The method of claim 1, wherein the large-scale ISAR simulation training data set is constructed in step S3 as follows:

a three-dimensional model resembling the measured Yak-42 target is designed with CADFEKO in the electromagnetic simulation software FEKO; the scale of the simulation model is consistent with that of the measured target, and the material of the simulated target is set to a perfect electric conductor;

in FEKO, the simulation model is placed in a global coordinate system OXYZ, the radar parameters are set, the surface of the simulation model is meshed, the method for computing the target's scattered echoes is set, and the simulated radar echoes are computed.
5. The method of claim 4, wherein the received echoes are processed in two ways:

in the first way, a two-dimensional FFT is applied directly to the reflected echoes to obtain an image of the simulated target, called the label image;

in the second way, the reflected echoes are randomly down-sampled in the range and azimuth directions, and a two-dimensional FFT is applied to the down-sampled data to obtain an image of the simulated target, called the initial image;

a training sample in the ISAR simulation training data set consists of one initial image and one label image, and all simulation training samples are divided into a training set and a validation set for pre-training the G-FCNN.
6. The method of claim 1, wherein in step S4 the minimum mean-square-error function is used as the loss function; after the loss function is set, the G-FCNN parameters are updated with batch gradient descent and back propagation to pre-train the G-FCNN.
7. The method according to claim 6, wherein in step S4 the minimum mean-square-error function used as the loss function is:
$$L(\{W\}) = \frac{1}{N}\sum_{i=1}^{N}\left\| f(\tilde{\sigma}_i; \{W\}) - \sigma_i \right\|_2^2$$

where $i$ denotes the $i$-th training sample and $N$ the number of training samples; $f(\tilde{\sigma}_i; \{W\})$ is the reconstruction output by the network; $\tilde{\sigma}_i$ denotes the initial image in the $i$-th training sample; $\{W\}$ denotes the set of network parameters and $f(\cdot)$ the function described by the network; $\sigma_i$ denotes the label image of the $i$-th sample; and $L(\{W\})$ denotes the reconstruction mean square error.
8. The method according to claim 7, wherein in step S5 the small-scale measured ISAR training data set is a set of measured ISAR target images, and each training sample in the measured training data set likewise consists of an initial image of measured ISAR data and a label image;

the pre-trained G-FCNN is fine-tuned with the small-scale measured ISAR training data set and the minimum mean-square-error function of step S4 to obtain the optimal G-FCNN for the unknown-ISAR-target imaging task.
9. The ISAR imaging method combining the gate unit and TL according to claim 1, wherein in step S6 the G-FCNN is used to image measured ISAR target data: the motion-compensated ISAR data are randomly down-sampled in the range and azimuth directions, an initial image is obtained from the down-sampled data by a two-dimensional FFT, the initial image is used as the input of the fine-tuned G-FCNN, and the output of the G-FCNN is the final imaging result.
Application CN202111067939.6A, filed 2021-09-13 (priority date 2021-09-13): Inverse synthetic aperture radar imaging method combining gate unit and transfer learning; granted as CN113985408B; status: Active.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111067939.6A | 2021-09-13 | 2021-09-13 | Inverse synthetic aperture radar imaging method combining gate unit and transfer learning


Publications (2)

Publication Number | Publication Date
CN113985408A | 2022-01-28
CN113985408B | 2024-04-05


Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202111067939.6A (Active; granted as CN113985408B) | Inverse synthetic aperture radar imaging method combining gate unit and transfer learning | 2021-09-13 | 2021-09-13

Country Status (1)

Country Link
CN (1) CN113985408B (en)


Patent Citations (2)

Publication number Priority date Publication date Assignee Title
US20190318806A1 (en) * 2018-04-12 2019-10-17 Illumina, Inc. Variant Classifier Based on Deep Neural Networks
CN112053004A (en) * 2020-09-14 2020-12-08 胜斗士(上海)科技技术发展有限公司 Method and apparatus for time series prediction

Non-Patent Citations (3)

Title
Changyu Hu et al., "FCNN-Based ISAR Sparse Imaging Exploiting Gate Units and Transfer Learning," IEEE Geoscience and Remote Sensing Letters, vol. 19.
Ying Tai et al., "MemNet: A Persistent Memory Network for Image Restoration," 2017 IEEE International Conference on Computer Vision, pp. 4549-4557.
杨其利 (Yang Qili), "Research on Infrared Dim and Small Target Detection Based on Deep Learning," China Master's Theses Full-text Database, Engineering Science and Technology II, no. 02.

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN114792115A (en) * 2022-05-17 2022-07-26 哈尔滨工业大学 Telemetry signal outlier removing method, device and medium based on deconvolution reconstruction network
CN116051925A (en) * 2023-01-04 2023-05-02 北京百度网讯科技有限公司 Training sample acquisition method, device, equipment and storage medium
CN116051925B (en) * 2023-01-04 2023-11-10 北京百度网讯科技有限公司 Training sample acquisition method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN113985408B (en) 2024-04-05

Similar Documents

Publication Publication Date Title
CN109212526B (en) Distributed array target angle measurement method for high-frequency ground wave radar
CN103713288B (en) Sparse Bayesian reconstruct linear array SAR formation method is minimized based on iteration
CN113985408B (en) Inverse synthetic aperture radar imaging method combining gate unit and transfer learning
CN110068805B (en) High-speed target HRRP reconstruction method based on variational Bayesian inference
CN108594228B (en) Space target attitude estimation method based on ISAR image refocusing
CN110161499B (en) Improved sparse Bayesian learning ISAR imaging scattering coefficient estimation method
CN102346249B (en) Implementation method for wide swath earth observation step scanning mode of synthetic aperture radar
CN104635230B (en) Method for MIMO (multi-input multi-output)-SAR (synthetic aperture radar) near field measurement imaging azimuth side lobe suppression
CN104833974B (en) The SAR Imaging fasts rear orientation projection method of compression is composed based on image
CN104459666B (en) Missile-borne SAR echo simulation and imaging method based on LabVIEW
Kang et al. Efficient synthesis of antenna pattern using improved PSO for spaceborne SAR performance and imaging in presence of element failure
CN113126087B (en) Space-borne interference imaging altimeter antenna
CN112198506B (en) Method, device and system for learning and imaging ultra-wideband through-wall radar and readable storage medium
CN113158485B (en) Electromagnetic scattering simulation method for electrically large-size target under near-field condition
CN107589421B (en) Array foresight SAR imaging method
CN109669184A (en) A kind of synthetic aperture radar azimuth ambiguity removing method based on full convolutional network
Dai et al. SRCNN-based enhanced imaging for low frequency radar
CN112859075A (en) Multi-band ISAR fusion high-resolution imaging method
CN113671485B (en) ADMM-based two-dimensional DOA estimation method for meter wave area array radar
CN112946599B (en) Radar space spectrum estimation method based on sparse array
CN114325707A (en) Sparse aperture micro-motion target ISAR imaging method based on depth expansion network
Hu et al. FCNN-based ISAR sparse imaging exploiting gate units and transfer learning
CN110927704B (en) Signal processing method for improving angle resolution of radar
CN114994674A (en) Intelligent microwave staring correlated imaging method, equipment and storage medium
CN104483670B (en) SAR (synthetic aperture radar) echo simulation method based on GPU (ground power unit)

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant