CN117148347A - Two-dimensional joint imaging and self-focusing method based on deep learning network - Google Patents

Two-dimensional joint imaging and self-focusing method based on deep learning network

Info

Publication number
CN117148347A
Authority
CN
China
Prior art keywords
matrix
dimensional
representing
imaging
self
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310698121.7A
Other languages
Chinese (zh)
Inventor
杨军
吕明久
吴瑕
马建朝
程祺
刘诗钊
陈文峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Air Force Early Warning Academy
Original Assignee
Air Force Early Warning Academy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Air Force Early Warning Academy filed Critical Air Force Early Warning Academy
Priority to CN202310698121.7A priority Critical patent/CN117148347A/en
Publication of CN117148347A publication Critical patent/CN117148347A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/89Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S13/90Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • G01S13/9021SAR image post-processing techniques
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/89Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S13/90Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • G01S13/904SAR modes
    • G01S13/9064Inverse SAR [ISAR]
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/417Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention relates to the field of inverse synthetic aperture radar (ISAR) image processing, in particular to a two-dimensional joint imaging and self-focusing method based on a deep learning network. A two-dimensional downsampled echo model is established under two-dimensional sparse conditions for ISAR imaging and self-focusing; for higher efficiency, matrixed l1-norm optimization is performed on the vectorized two-dimensional downsampled echo model; finally, the proposed matrixed l1-norm optimization algorithm is unrolled into a neural network structure, the network is trained in the complex domain on a simulated data set using the back-propagation algorithm, and the trained network is used for ISAR imaging, improving the imaging effect while reducing the computational load.

Description

Two-dimensional joint imaging and self-focusing method based on deep learning network
Technical Field
The invention relates to the field of inverse synthetic aperture radar image processing, in particular to a two-dimensional joint imaging and self-focusing method based on a deep learning network.
Background
Inverse synthetic aperture radar (ISAR) can acquire two-dimensional (2D) images of non-cooperative moving targets and is widely used in modern military and civilian fields; accurate motion compensation is a prerequisite for high-resolution ISAR imaging. If the target motion is not exactly compensated, the residual motion introduces residual phase errors that defocus the imaging result.
Today, deep learning (DL) techniques have been introduced into the area of ISAR imaging and have proven effective for acquiring high-resolution images by designing deep networks trained on large data sets; such approaches are also called data-driven methods. The most common deep-learning imaging methods generally reuse traditional neural network structures widely applied in computer vision and related fields to map low-resolution images to high-resolution images. Typical networks such as the convolutional neural network (CNN), the fully-connected CNN (FCNN) and UNet have therefore been applied to ISAR imaging and have obtained high-resolution results superior to traditional methods.
in the prior art, the method comprises the steps of,
[ S.Wei, J.Liang, M.Wang, J.Shi, X.Zhang, andJinheRan. "AF-AMPNet: adeep learningapproachforsparseapertureISARimagingandautofocusing," IEEETrans. Geosci. RemoteSens., vol.60, pp.5206514, apr.2022.] proposes an AMP-based imaging and phase error estimation depth network;
similar work was studied in [ X.Li, X.Bai, andF.Zhou, high-resolution isar imaging and da utofocusingvia2D-ADMM-Net, remotesens, vol.13, no.12, pp.2326, jun.2021 ], where ADMM was deployed as a deep network structure by combining with a phase error compensation network.
However, these networks only consider phase error estimation in the azimuth direction. In fact, for random stepped-frequency (RSF) ISAR imaging, phase errors exist in both the range and azimuth dimensions and must be compensated jointly.
Disclosure of Invention
In order to solve the problem of phase errors in the two-dimensional range-azimuth space, the invention provides a two-dimensional joint imaging and self-focusing method based on a deep learning network, which comprises the following steps:
step S1, a two-dimensional downsampling echo model containing two-dimensional phase errors is established;
s2, matrixing the two-dimensional downsampled echo model 1 The norm optimization, including,
step S21, fixing an error matrix E in the two-dimensional downsampled echo model and carrying out l on a sparse scene matrix X in the two-dimensional downsampled echo model 1 Optimizing norms;
step S22, fixing the sparse scene matrix X, and carrying out l on the error matrix 1 Optimizing norms;
step S23, through alternate iterative optimization of S21 and S22, an optimized sparse scene matrix X and an optimized error matrix E are finally obtained;
step S3, expanding each sub-step in the step S2 into a neural network structure;
step S4, inputting a simulated data set into the neural network structure for training, obtaining the optimal model parameters, and imaging measured data with the network model trained on the simulated data.
Further, in step S1, the two-dimensional downsampled echo model is represented by equation (1),

Y = F_r(E′ ⊙ (H′_r X H′_aᵀ))F_aᵀ + W = E ⊙ (Φ_r X Φ_aᵀ) + W    (1)

in equation (1), Y represents the two-dimensional sparse echo data matrix, F_r represents the range random downsampling matrix, F_a represents the azimuth random downsampling matrix, H′_r represents the range dictionary matrix, H′_a represents the azimuth dictionary matrix, W represents the noise matrix, E′ represents the error matrix and E = F_r E′ F_aᵀ its downsampled form, X represents the sparse scene matrix, ⊙ represents the Hadamard product, Φ_r = F_r H′_r, and Φ_a = F_a H′_a.
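For illustration, a minimal numpy sketch of equation (1) with toy dimensions follows; the random dictionaries stand in for the actual range and azimuth dictionary matrices, and the factorization E = F_r E′ F_aᵀ assumes selection-type (rows-of-identity) downsampling matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, Na, M, H, K = 64, 64, 32, 32, 64      # full sizes, downsampled sizes, scene size (toy values)

# Range and azimuth dictionary matrices H'_r, H'_a (random stand-ins here).
Hr = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
Ha = rng.standard_normal((Na, K)) + 1j * rng.standard_normal((Na, K))

# Random downsampling matrices F_r, F_a: randomly chosen rows of the identity.
Fr = np.eye(N)[rng.choice(N, M, replace=False)]
Fa = np.eye(Na)[rng.choice(Na, H, replace=False)]
Phi_r, Phi_a = Fr @ Hr, Fa @ Ha          # Φ_r = F_r H'_r, Φ_a = F_a H'_a

X = np.zeros((K, K), dtype=complex)      # sparse scene matrix with a few scatterers
X[rng.choice(K, 5), rng.choice(K, 5)] = 1.0

E_full = np.exp(1j * 2 * np.pi * rng.random((N, Na)))   # 2D phase error matrix E'
E = Fr @ E_full @ Fa.T                                  # downsampled error E = F_r E' F_a^T
W = 0.01 * (rng.standard_normal((M, H)) + 1j * rng.standard_normal((M, H)))

Y = E * (Phi_r @ X @ Phi_a.T) + W        # equation (1): Y = E ⊙ (Φ_r X Φ_a^T) + W
```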
Further, the step S2 further includes vectorizing the two-dimensional downsampled echo model, wherein the vectorized two-dimensional downsampled echo model is represented by equation (3),

y = diag(e)(Φ_a ⊗ Φ_r)x + w    (3)

in equation (3), ⊗ is the Kronecker product, x = vec(X) represents the vectorized sparse scene matrix, e = vec(E) represents the vectorized error matrix, and w represents the vectorized noise matrix.
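A quick numerical check of this vectorization (assuming the column-stacking vec(·) convention, under which vec(A X Bᵀ) = (B ⊗ A) vec(X)):

```python
import numpy as np

rng = np.random.default_rng(1)
M, H, K = 4, 5, 3
Phi_r = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))
Phi_a = rng.standard_normal((H, K)) + 1j * rng.standard_normal((H, K))
X = rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))
E = np.exp(1j * 2 * np.pi * rng.random((M, H)))

Y = E * (Phi_r @ X @ Phi_a.T)            # matrix model, noise omitted

# Vectorized model of equation (3): y = diag(e)(Φ_a ⊗ Φ_r)x
# ('F' ordering gives column-stacking, matching the Kronecker identity)
y = np.diag(E.flatten(order="F")) @ np.kron(Phi_a, Phi_r) @ X.flatten(order="F")

assert np.allclose(Y.flatten(order="F"), y)   # the two forms agree
```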
further, in the step S2, the two-dimensional downsampled echo model is matrixed l 1 Norm optimization, wherein a norm optimization function is obtained,
in the formula (4), a is a Lagrangian multiplier, d is a penalty parameter, wherein <' > represents an inner product, v is an auxiliary variable, lambda represents a first regularization parameter, and delta represents a second regularization parameter.
Further, in the step S21, l1-norm optimization is performed on the sparse scene matrix X in the two-dimensional downsampled echo model, denoted as equation (5),
wherein Z(·) represents the contraction function, X^{p+1} represents the sparse scene matrix after the (p+1)-th iteration, A^{p+1} represents the Lagrangian multiplier matrix after the (p+1)-th iteration, obtained by matrixing the Lagrangian multiplier, V^{p+1} represents the auxiliary variable matrix after the (p+1)-th iteration, obtained by matrixing the auxiliary variable, u = e^H ⊙ vec(Y), where the vector e^H is converted into the matrix E^H to return to matrix form, (·)^H represents the conjugate transpose, and (·)^* represents the conjugate operation.
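In l1-norm optimization, the contraction function Z(·) is typically the complex soft-threshold operator; a minimal sketch assuming that standard definition:

```python
import numpy as np

def shrink(z: np.ndarray, gamma: float) -> np.ndarray:
    """Complex soft-threshold: shrink each magnitude by gamma, keep the phase."""
    return np.exp(1j * np.angle(z)) * np.maximum(np.abs(z) - gamma, 0.0)

# Entries with magnitude below gamma are zeroed; the rest move toward zero.
print(shrink(np.array([0.2 + 0.1j, 1.0 - 2.0j, -0.05j]), 0.3))
```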
Further, in the step S22, l1-norm optimization is performed on the error matrix, denoted as equation (6),
in equation (6), Y_{m,h} represents the element in the m-th row and h-th column of the two-dimensional sparse echo data matrix, p denotes the p-th iteration, E^p_{m,h} represents the two-dimensional phase error in the echo signal at the p-th iteration, (·)_{m,·} represents the m-th row of a matrix, and (·)_{·,h} represents the h-th column of a matrix.
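Equation (6) itself is not fully recoverable from the text, but for a unit-modulus error matrix the least-squares update with X fixed has a standard element-wise closed form; a sketch of that form (the name G for the predicted echo Φ_r X Φ_aᵀ is an illustrative assumption):

```python
import numpy as np

def update_error_matrix(Y: np.ndarray, G: np.ndarray) -> np.ndarray:
    """With X fixed and G = Phi_r @ X @ Phi_a.T the predicted echo, the
    unit-modulus E minimizing ||Y - E * G||_F^2 element-wise is
    E_{m,h} = exp(j * angle(Y_{m,h} * conj(G_{m,h})))."""
    return np.exp(1j * np.angle(Y * np.conj(G)))
```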
Further, in the step S3, the algorithm steps of step S21 and step S22 are unrolled into a deep learning network structure, and a single layer of the neural network structure mainly comprises: an X module for updating the sparse scene matrix X as in steps S21 and S22, a Z module for updating the auxiliary variable matrix V, an A module for updating the Lagrangian multiplier matrix A, and an E module for updating the phase error matrix E.
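For illustration, a structural sketch of one such layer follows (PyTorch with complex tensors). This is a sketch, not the patent's exact formulas: the X module is simplified to a learnable gradient step, and the step-size parameter is an added assumption:

```python
import torch

class UnrolledLayer(torch.nn.Module):
    """One unrolled layer mirroring the X / Z / A / E modules, with learnable
    per-layer parameters lambda_l and delta_l (structural sketch only)."""

    def __init__(self):
        super().__init__()
        self.lam = torch.nn.Parameter(torch.tensor(0.05))   # λ_l
        self.delta = torch.nn.Parameter(torch.tensor(1.0))  # δ_l
        self.step = torch.nn.Parameter(torch.tensor(0.1))   # gradient step size (assumption)

    @staticmethod
    def soft(z, gamma):
        # complex soft-threshold used by the Z module
        return torch.exp(1j * torch.angle(z)) * torch.relu(torch.abs(z) - gamma)

    def forward(self, Y, X, V, A, Phi_r, Phi_a):
        G = Phi_r @ X @ Phi_a.T
        E = torch.exp(1j * torch.angle(Y * torch.conj(G)))   # E module: phase-error update
        U = torch.conj(E) * Y                                # phase-corrected echo
        # X module: gradient step on the data term plus the splitting term
        grad = Phi_r.conj().T @ (G - U) @ Phi_a.conj() + self.delta * (X - V) + A
        X = X - self.step * grad
        V = self.soft(X + A / self.delta, self.lam / self.delta)  # Z module: shrinkage
        A = A + self.delta * (X - V)                              # A module: multiplier update
        return X, V, A, E
```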
Further, in said step S4, the network parameters are trained by inputting the simulated data set, and the optimal parameters are determined through the loss function of equation (7),
in equation (7), Q is the number of training samples in the simulated training set, X̂^q(Θ) represents the reconstruction result of the q-th sample, X^q_label is the corresponding label image, and Θ = {λ_l, δ_l} represents the parameters to be optimized.
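A minimal sketch of such a loss (the exact normalization of equation (7) is not recoverable from the text, so a plain mean squared Frobenius error over the Q samples is assumed):

```python
import torch

def training_loss(X_hat: torch.Tensor, X_label: torch.Tensor) -> torch.Tensor:
    """X_hat, X_label: (Q, K, K) complex batches of reconstructions and labels.
    Returns the mean over the Q samples of the squared Frobenius error."""
    return (torch.abs(X_hat - X_label) ** 2).sum(dim=(-2, -1)).mean()
```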
Compared with the prior art, the method establishes a two-dimensional downsampled echo model under two-dimensional sparse conditions for ISAR imaging and self-focusing; for higher efficiency, matrixed l1-norm optimization is performed on the vectorized two-dimensional downsampled echo model; finally, the proposed matrixed l1-norm optimization algorithm is unrolled into a neural network structure, the network is trained in the complex domain on a simulated data set using the back-propagation algorithm, and the trained network is used for ISAR imaging, improving the imaging effect while reducing the computational load.
In particular, the invention fully utilizes the two-dimensional coupling information to construct a two-dimensional downsampled echo model under two-dimensional sparse conditions for joint self-focusing and high-resolution ISAR imaging, and converts the matrixed l1-norm optimization algorithm into an unrolled network form, thereby obtaining optimal network parameters, reducing computation and memory requirements, and improving imaging quality.
Drawings
FIG. 1 is a step diagram of a two-dimensional joint imaging and self-focusing method based on a deep learning network according to an embodiment of the invention;
FIG. 2 is a schematic diagram of a neural network according to an embodiment of the invention;
FIG. 3 is a schematic diagram of a neural network in a single layer structure according to an embodiment of the invention;
FIG. 4 is a graph comparing the label image with the test-data image obtained by the RD algorithm according to the embodiment of the invention;
FIG. 5 is a graph of imaging results under different 2DSPR conditions using different methods for an embodiment of the invention at a signal-to-noise ratio of 10 dB;
FIG. 6 is a graph showing imaging results of different methods under different SNR conditions when the 2DSPR of the inventive example is (0.7, 0.7);
FIG. 7 is a graph comparing the label image of measured Yak-42 aircraft data with incomplete Yak-42 data containing a 2D phase error for an embodiment of the invention;
FIG. 8 is a graph showing the imaging results of a Yak-42 aircraft under different 2DSPR conditions using different methods at an SNR of 10 dB;
FIG. 9 is a graph of the results of imaging a Yak-42 aircraft at different signal-to-noise ratios using different methods according to embodiments of the invention;
FIG. 10 is a graph of the imaging results of Boeing-727 data obtained using the RD method of the inventive example;
FIG. 11 is a graph showing the results of imaging a Boeing-727 aircraft under different 2DSPR conditions by different methods according to embodiments of the invention.
Detailed Description
In order that the objects and advantages of the invention will become more apparent, the invention will be further described with reference to the following examples; it should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present invention, and are not intended to limit the scope of the present invention.
It should be noted that, in the description of the present invention, terms such as "upper," "lower," "left," "right," "inner," "outer," and the like indicate directions or positional relationships based on the directions or positional relationships shown in the drawings, which are merely for convenience of description, and do not indicate or imply that the apparatus or elements must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention.
Furthermore, it should be noted that, in the description of the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be either fixedly connected, detachably connected, or integrally connected, for example; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
Referring to fig. 1, which is a step diagram of a two-dimensional joint imaging and self-focusing method based on a deep learning network according to an embodiment of the present invention, the two-dimensional joint imaging and self-focusing method based on a deep learning network of the present invention includes:
step S1, a two-dimensional downsampling echo model containing two-dimensional phase errors is established;
s2, matrixing the two-dimensional downsampled echo model 1 The norm optimization, including,
step S21, fixing an error matrix E in the two-dimensional downsampled echo model and carrying out l on a sparse scene matrix X in the two-dimensional downsampled echo model 1 Optimizing norms;
step S22, fixing the sparse scene matrix X, and carrying out l on the error matrix 1 Optimizing norms;
step S23, through alternate iterative optimization of S21 and S22, an optimized sparse scene matrix X and an optimized error matrix E are finally obtained;
step S3, expanding each sub-step in the step S2 into a neural network structure;
step S4, inputting a simulated data set into the neural network structure for training, obtaining the optimal model parameters, and imaging measured data with the network model trained on the simulated data.
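As a compact illustration of the alternation in steps S21-S23, a minimal numpy sketch follows; it is a simplification in which the X update is a single proximal-gradient step rather than the full matrixed l1 solution with auxiliary and multiplier matrices, and the E update uses the standard unit-modulus closed form:

```python
import numpy as np

def alternating_optimization(Y, Phi_r, Phi_a, n_iter=150, lam=0.05, step=0.1):
    """Steps S21-S23: alternately update the sparse scene matrix X (with the
    error matrix E fixed) and the error matrix E (with X fixed)."""
    K = Phi_r.shape[1]
    X = np.zeros((K, K), dtype=complex)
    E = np.ones_like(Y)
    for _ in range(n_iter):
        # S21: l1 update of X with E fixed (one proximal-gradient step)
        U = np.conj(E) * Y                                   # phase-corrected echo
        G = Phi_r @ X @ Phi_a.T
        X = X - step * (Phi_r.conj().T @ (G - U) @ Phi_a.conj())
        X = np.exp(1j * np.angle(X)) * np.maximum(np.abs(X) - step * lam, 0)
        # S22: unit-modulus update of E with X fixed
        G = Phi_r @ X @ Phi_a.T
        E = np.exp(1j * np.angle(Y * np.conj(G)))
    return X, E          # S23: the alternately optimized scene and error matrices
```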
Specifically, it will be appreciated by those skilled in the art that an imaging radar system based on stepped-frequency signals must transmit N_a bursts, each containing N narrowband sub-pulses, to obtain an ISAR imaging result.
In step S1, the two-dimensional downsampled echo model is represented by equation (1),

Y = F_r(E′ ⊙ (H′_r X H′_aᵀ))F_aᵀ + W = E ⊙ (Φ_r X Φ_aᵀ) + W    (1)

in equation (1), Y represents the two-dimensional sparse echo data matrix, F_r represents the range random downsampling matrix, F_a represents the azimuth random downsampling matrix, H′_r represents the range dictionary matrix, H′_a represents the azimuth dictionary matrix, W represents the noise matrix, E′ represents the two-dimensional error matrix and E = F_r E′ F_aᵀ its downsampled form, ⊙ represents the Hadamard product, Φ_r = F_r H′_r, and Φ_a = F_a H′_a. X represents the sparse scene matrix that needs to be restored.
and in particular, vectorizing a two-dimensional downsampled echo model, wherein,
vectorization of the vectorized two-dimensional downsampled echo model is represented by (3),
in the formula (3), A is Kronecker product, x represents vectorized sparse scene matrix, E represents vectorized matrix of matrix E, w represents sparse noise matrix,
the vectorized two-dimensional downsampled echo model is represented by (3),
in the formula (4), diag {.cndot } represents a diagonal operation, and MH is the dimension of the matrix.
Specifically, in the step S2, matrixed l1-norm optimization is performed on the two-dimensional downsampled echo model, wherein the norm optimization function of equation (4) is obtained,
in equation (4), a is the Lagrangian multiplier, d is the penalty parameter, ⟨·,·⟩ represents the inner product, v is the auxiliary variable, λ represents the first regularization parameter, and δ represents the second regularization parameter.
Specifically, in the step S21, l1-norm optimization is performed on the sparse scene matrix X in the two-dimensional downsampled echo model, denoted as equation (5),
wherein Z(·) represents the contraction function, X^{p+1} represents the sparse scene matrix after the (p+1)-th iteration, A^{p+1} represents the Lagrangian multiplier matrix after the (p+1)-th iteration, obtained by matrixing the Lagrangian multiplier, V^{p+1} represents the auxiliary variable matrix after the (p+1)-th iteration, obtained by matrixing the auxiliary variable, u = e^H ⊙ vec(Y), where the vector e^H is converted into the matrix E^H to return to matrix form, (·)^H represents the conjugate transpose, and (·)^* represents the conjugate operation.
Specifically, in the step S22, l1-norm optimization is performed on the error matrix, denoted as equation (6),
in equation (6), Y_{m,h} represents the element in the m-th row and h-th column of the two-dimensional sparse echo data matrix, p denotes the p-th iteration, E^p_{m,h} represents the two-dimensional phase error in the echo signal at the p-th iteration, (·)_{m,·} represents the m-th row of a matrix, and (·)_{·,h} represents the h-th column of a matrix.
Specifically, in the step S3, the algorithm steps of step S21 and step S22 are unrolled into a deep learning network structure, and a single layer of the neural network structure mainly comprises: an X module for updating the imaging matrix X as in steps S21 and S22, a Z module for updating the auxiliary variable matrix V, an A module for updating the Lagrangian multiplier matrix A, and an E module for updating the phase error matrix E, wherein the computations corresponding to the X, Z, A and E modules in the l-th layer are expressed as equation (8):
in equation (8), angle(·) represents the phase-extraction operation, E^{l-1}_{·,h} represents the h-th column of the phase error matrix E in the (l-1)-th layer, (·)_{·,h} denotes the h-th column of a matrix, and δ^{l-1} and λ^{l-1} are the network parameters of the (l-1)-th layer; the aim of the unrolled algorithm is to obtain the optimal parameters δ^{l-1} and λ^{l-1} of each layer through training.
The U function module in the X module optimizes the input Y according to the updated two-dimensional error matrix E^{l-1}, where E^{l-1} is the two-dimensional error matrix obtained at the (l-1)-th layer; the output U^{l-1} of the U function module in the l-th layer is expressed by equation (9).
The Z function module in the l-th layer updates the input V^{l-1} to obtain the output V^l, expressed as,
V^l(z^l; γ^l) = sign(z^l)·ReLU(|z^l| − γ^l)    (10)
in equation (10), z^l = X^l + A^{l-1}/δ^{l-1}, X^l is the updated value of the scene matrix to be restored in the l-th layer, A^{l-1} is the updated value of the Lagrangian multiplier matrix in the (l-1)-th layer, δ^{l-1} and λ^{l-1} are the network parameters of the (l-1)-th layer, γ^{l-1} = λ^{l-1}/δ^{l-1}, and ReLU(·) represents the linear rectification function.
Specifically, in said step S4, the network parameters are trained by inputting the simulated data set, and the optimal parameters are determined through the loss function of equation (11),
in equation (11), Q is the number of training samples in the simulated training set, X̂^q(Θ) represents the reconstruction result of the q-th sample, X^q_label is the corresponding label image, and Θ = {λ_l, δ_l} represents the parameters to be optimized; after training on the simulated data set, the model can be used to image measured data.
Specifically, referring to fig. 2, a schematic diagram of the neural network structure of the invention, and fig. 3, the structure of a single network layer: X(V^l, A^l, E^l; λ^l, δ^l) represents the reconstruction of X in the l-th layer, Z(X^l, A^l; λ^l, δ^l) represents the contraction function in the l-th layer, and E(X^l) and A(X^l, V^l; δ^l) represent the corresponding iterative computations in the l-th layer.
The invention compares the prior-art 2D-IADIA algorithm with the imaging scheme of the proposed neural network structure, denoted 2D-IADIANet, verifying its effectiveness under different two-dimensional sampling rates and signal-to-noise ratios.
Specifically, 2D-IADIANet and 2D-ADNet use the same training data set: Q = 40 scenes are generated with the training data generation method, 100-500 scattering points are randomly generated in each scene with intensities following a Gaussian random distribution, the scene size is 128×128, echo data are derived from the scene data by inverting the imaging model, and two-dimensional random phase errors are added. The first 20 sets of data are used to train the model and the last 20 sets to test the trained model; the initial ρ^0 and δ^0 are set to 0.5 and 1, the batch size is set to 5, all experiments are processed on an AMD Ryzen 9 4900H CPU with 16 GB of memory, and the number of trained network layers is set to L = 10. In addition, 2D-IADIA, 2D-ADNet and FCNN are used as comparison algorithms: the number of iterations of the 2D-IADIA algorithm is set to 150; 2D-ADNet is also a deep sparse reconstruction network, but it can only process a two-dimensional imaging model containing azimuth errors; FCNN is an 'image-to-image' training network that only enhances image quality on the basis of an initial ISAR imaging result, so an additional training data set must be generated for it. The invention obtains initial ISAR images with the traditional RD algorithm and generates 1000 simulated original image pairs as the input data set and label images, of which 600 scenes are used for network training and the remaining 400 scenes belong to the test set.
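A minimal sketch of the scene-generation step described above (the exact point-placement scheme is not stated, so random positions and magnitudes of Gaussian draws as intensities are assumptions):

```python
import numpy as np

def make_training_scene(size=128, rng=np.random.default_rng()):
    """One random training scene: 100-500 scattering points at random
    positions on a size x size grid, Gaussian-random intensities."""
    n_points = rng.integers(100, 501)
    scene = np.zeros((size, size), dtype=complex)
    rows = rng.integers(0, size, n_points)
    cols = rng.integers(0, size, n_points)
    scene[rows, cols] = np.abs(rng.standard_normal(n_points))
    return scene

train_scenes = [make_training_scene() for _ in range(20)]   # first 20 sets: training
test_scenes = [make_training_scene() for _ in range(20)]    # last 20 sets: testing
```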
Example 1
Referring to fig. 4 and 5, fig. 4 compares the label image of the embodiment of the invention with the test-data image obtained by the RD algorithm, and fig. 5 shows the imaging results under different 2DSPR conditions using different methods at a signal-to-noise ratio of 10 dB. In this embodiment the trained model is tested with simulated data; the original label of the test data is shown in fig. 4(a), a two-dimensional random phase error is added to the test data, the sparsity ratio is set to (0.9, 0.9), and the imaging result obtained by the conventional RD algorithm is shown in fig. 4(b). Obviously, the image is covered by a large number of falsely reconstructed points, making accurate target identification difficult.
For the simulated test data in fig. 4, with the two-dimensional random phase error added and the SNR set to 10 dB, data at 2DSPRs of (0.4, 0.4), (0.6, 0.6) and (0.8, 0.8) were imaged using the four methods described above; the results are shown in fig. 5, and the corresponding evaluation indexes are given in table 1.
TABLE 1 Evaluation indexes of simulated data imaging results under different 2DSPR conditions
In fig. 5, each column gives the imaging results of a different method and each row the corresponding results under the same 2DSPR condition. It can be seen that, at all three sampling rates, the image obtained by 2D-ADNet has many false scattering points; this is because the network can only handle azimuth random phase errors, so its reconstruction performance is significantly affected by two-dimensional phase errors. FCNN appears to offer good imaging quality and a clean background, but compared with the real image it loses a large number of weak scattering points, especially at low sampling rates. Although the imaging quality of 2D-IADIA and 2D-IADIANet gradually worsens as the sampling rate decreases, they produce fewer false scattering points than the other algorithms, indicating good reconstruction performance. In particular, 2D-IADIANet further improves imaging performance thanks to the adaptive, layer-wise optimization parameters learned during training. The evaluation indexes in table 1 also verify the superior performance of the proposed network-based approach under the given low sampling rates. Furthermore, a comparison of computation times shows that the network-based methods generally have high processing efficiency: the conventional network, FCNN, has the shortest processing time, while 2D-IADIANet, as the network unrolling of 2D-IADIA, also greatly improves computational efficiency while achieving similar performance, indicating potential for real-time processing.
Example 2
In this embodiment, the source and arrangement of the simulated data are the same as those of embodiment 1, further testing the effect of SNR on the imaging performance of the proposed depth network. The sampling rate of the echo data is kept at (0.7, 0.7), i.e., about 49% of the echo data is available; imaging results under different noise conditions using different methods are shown in fig. 6. For visual comparison of the imaging results, table 2 gives the corresponding evaluation indexes of the four methods at signal-to-noise ratios of 5 dB and 0 dB.
TABLE 2 Comparison of simulated data imaging result evaluation indexes under different SNR conditions
2D-ADNet cannot handle 2D phase errors and has the worst imaging quality, especially at a low signal-to-noise ratio (SNR = 0 dB); the indexes in table 2 verify this conclusion. FCNN can achieve good autofocus performance, but many scattering points are missing and a large number of false weak scattering points appear. A possible reason is that FCNN has no corresponding phase-error correction network and only establishes an internal mapping between label images and test images, so it easily deletes true weak scattering points while retaining false ones. In contrast, 2D-IADIA and its network extension obtain well-focused images because they include dedicated 2D phase-error self-correcting operations. Compared with 2D-IADIA, 2D-IADIANet obtains its optimal parameters through network learning rather than manual setting and thus adapts better to noise; in particular, at a signal-to-noise ratio of 0 dB, the performance of the network-based approach improves significantly. The evaluation indexes in table 2 further reveal the advantages of the proposed network-based approach.
Example 3
Referring to fig. 6, 7, 8 and 9: fig. 6 shows imaging results at a 2DSPR of (0.7, 0.7) under different SNR conditions; fig. 7 compares the label image of measured Yak-42 aircraft data with incomplete Yak-42 data containing 2D phase errors; fig. 8 shows imaging results of the Yak-42 aircraft under different 2DSPR conditions using different methods at an SNR of 10 dB; and fig. 9 shows imaging results of the Yak-42 aircraft at different signal-to-noise ratios using different methods. In this embodiment, measured Yak-42 aircraft data are used to test the performance of the proposed network; it should be noted that the network was trained only on simulated data, so this further verifies the generalization performance of the proposed method. Since the raw measured data have a high signal-to-noise ratio, the measured data are assumed noise-free, and the data dimension is 128×128. For ease of comparison, given complete data without 2D phase error, the conventional RD imaging result is shown in fig. 7(a), which also serves as the label image; fig. 7(b) shows the imaging result of incomplete data with added 2D phase error. As can be seen from fig. 7(b), the imaging result is blurred and difficult to recognize.
Also, 2D-ADNet has the worst imaging performance, especially at low sampling rates, producing images with a large noise background. When the 2DSPR is (0.4, 0.4), FCNN loses most of the target; as the sampling rate increases, most scattering points are preserved. However, compared with the label image, the image obtained by FCNN enhances weak scattering points, giving the reconstruction a higher contrast; this is why FCNN has the smallest SSIM value (as shown in table 3). Referring to fig. 8, both 2D-IADIA and 2D-IADIANet obtain fine imaging results with little noise background; in particular, at a 2DSPR of (0.4, 0.4), 2D-IADIANet still obtains satisfactory results, indicating good generalization to the measured data.
TABLE 3 Evaluation indexes of Yak-42 aircraft imaging results under different 2DSPRs
With the sampling rate fixed at (0.7, 0.7) and the SNR set to 10, 5 and 0 dB respectively, the imaging results obtained by the four different methods are shown in fig. 9.
TABLE 4 Comparison of Yak-42 aircraft imaging result evaluation indexes under different SNR conditions
Example 4
Referring to fig. 10 and 11, fig. 10 shows the Boeing-727 imaging result obtained with the RD method, and fig. 11 shows the Boeing-727 imaging results obtained by different methods under different 2DSPR conditions. To further verify the effectiveness of the proposed network, experiments were performed on another set of measured Boeing-727 data. This data set was acquired by a stepped-frequency radar system with a bandwidth of 150 MHz; it contains 128 bursts, each with 128 sub-pulses. The raw data have undergone range alignment and phase focusing, and the original signal-to-noise ratio is 7.12 dB. Fig. 10 shows the complete-data imaging result of the RD method; it can be seen that the existing 2D phase errors lead to a blurred imaging result and spurious scattering points.
Similarly, the imaging results of the four different methods under different 2DSPR conditions are shown in fig. 11. Because of the low signal-to-noise ratio of the Boeing-727 data, an accurate reference target image cannot be obtained, so the RMSE, SSIM and PSNR indexes used above are not applicable to these measured data. Instead, image entropy (IE) and image contrast (IC) are used to evaluate imaging quality, and the corresponding evaluation indexes are listed in table 5.
TABLE 5 Boeing-727 aircraft imaging performance evaluation indexes under different 2DSPR conditions
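IE and IC are standard focus measures for ISAR images; minimal sketches assuming their usual definitions (lower IE and higher IC indicate better focus):

```python
import numpy as np

def image_entropy(img: np.ndarray) -> float:
    """IE over the normalized intensity distribution of the image."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def image_contrast(img: np.ndarray) -> float:
    """IC as the ratio of standard deviation to mean of the image intensity."""
    a = np.abs(img) ** 2
    return float(a.std() / a.mean())
```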
For the imaging results on Boeing-727 data, 2D-ADNet shows the worst imaging performance under all three sampling conditions; because of its weak phase-error compensation capability, the restored image is surrounded by a large number of false points. FCNN achieves a cleaner background image than 2D-ADNet; although its imaging results have lower IE and higher IC values, only strong scattering points remain and many weak scattering points are lost, which is detrimental to target identification. Compared with the other two methods, 2D-IADIA and 2D-IADIANet achieve similar and optimal imaging results under all conditions. As with the simulated data and the Yak-42 data, 2D-IADIANet reduces the time consumption;
moreover, thanks to the self-learning ability of its parameters and the small number of layers, 2D-IADIANet further improves the computational efficiency, verifying the effectiveness of the proposed network.
Thus far, the technical solution of the present invention has been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of protection of the present invention is not limited to these specific embodiments. Equivalent modifications and substitutions for related technical features may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and substitutions will be within the scope of the present invention.

Claims (8)

1. A two-dimensional joint imaging and self-focusing method based on a deep learning network, comprising:
step S1, a two-dimensional downsampling echo model containing two-dimensional phase errors is established;
s2, matrixing the two-dimensional downsampled echo model 1 The norm optimization, including,
step S21, fixing an error matrix E in the two-dimensional downsampled echo model and carrying out l on a sparse scene matrix X in the two-dimensional downsampled echo model 1 Optimizing norms;
step S22, fixing the sparse scene matrix X, and carrying out l on the error matrix 1 Optimizing norms;
step S23, through alternate iterative optimization of S21 and S22, an optimized sparse scene matrix X and an optimized error matrix E are finally obtained;
step S3, expanding each sub-step in the step S2 into a neural network structure;
step S4, inputting a simulated data set into the neural network structure for training, obtaining the optimal model parameters, and imaging measured data with the network model trained on the simulated data.
2. The two-dimensional joint imaging and self-focusing method based on a deep learning network according to claim 1, wherein in step S1, the two-dimensional downsampled echo model is represented by equation (1),

Y = F_r(E′ ⊙ (H′_r X H′_aᵀ))F_aᵀ + W = E ⊙ (Φ_r X Φ_aᵀ) + W    (1)

in equation (1), Y represents the two-dimensional sparse echo data matrix, F_r represents the range random downsampling matrix, F_a represents the azimuth random downsampling matrix, H′_r represents the range dictionary matrix, H′_a represents the azimuth dictionary matrix, W represents the noise matrix, E′ represents the error matrix and E = F_r E′ F_aᵀ its downsampled form, X represents the sparse scene matrix, ⊙ represents the Hadamard product, Φ_r = F_r H′_r, and Φ_a = F_a H′_a.
3. The two-dimensional joint imaging and self-focusing method based on a deep learning network according to claim 2, wherein in step S2, the method further comprises vectorizing the two-dimensional downsampled echo model, wherein the vectorized two-dimensional downsampled echo model is represented by equation (3),

y = diag(e)(Φ_a ⊗ Φ_r)x + w    (3)

in equation (3), ⊗ is the Kronecker product, x = vec(X) represents the vectorized sparse scene matrix, e = vec(E) represents the vectorized error matrix, and w represents the vectorized noise matrix.
4. The two-dimensional joint imaging and self-focusing method based on a deep learning network according to claim 3, wherein in said step S2, matrixed l1-norm optimization is performed on said two-dimensional downsampled echo model, wherein the norm optimization function of equation (4) is obtained,
in equation (4), a is the Lagrangian multiplier, d is the penalty parameter, v is the auxiliary variable, λ is the first regularization parameter, and δ is the second regularization parameter.
5. The two-dimensional joint imaging and self-focusing method based on a deep learning network according to claim 1, wherein in the step S21, l1-norm optimization is performed on the sparse scene matrix X in the two-dimensional downsampled echo model, denoted as equation (5),
in equation (5), Z(·) represents the contraction function, X^{p+1} represents the sparse scene matrix after the (p+1)-th iteration, A^{p+1} represents the Lagrangian multiplier matrix after the (p+1)-th iteration, obtained by matrixing the Lagrangian multiplier, V^{p+1} represents the auxiliary variable matrix after the (p+1)-th iteration, obtained by matrixing the auxiliary variable, u = e^H ⊙ vec(Y), where the vector e^H is converted into the matrix E^H to return to matrix form, (·)^H represents the conjugate transpose, and (·)^* represents the conjugate operation.
6. The two-dimensional joint imaging and self-focusing method based on a deep learning network according to claim 1, wherein in said step S22, l1-norm optimization is performed on the error matrix, denoted as equation (6),
in equation (6), Y_{m,h} represents the element in the m-th row and h-th column of the two-dimensional sparse echo data matrix, p denotes the p-th iteration, E^p_{m,h} represents the two-dimensional phase error in the echo signal at the p-th iteration, (·)_{m,·} represents the m-th row of a matrix, and (·)_{·,h} represents the h-th column of a matrix.
7. The two-dimensional joint imaging and self-focusing method based on a deep learning network according to claim 1, wherein in the step S3, the algorithm steps of step S21 and step S22 are unrolled into a deep learning network structure, and a single layer of the neural network structure mainly comprises: an X module for updating the sparse scene matrix X as in steps S21 and S22, a Z module for updating the auxiliary variable matrix V, an A module for updating the Lagrangian multiplier matrix A, and an E module for updating the phase error matrix E.
8. The two-dimensional joint imaging and self-focusing method based on a deep learning network according to claim 1, wherein in said step S4, the network parameters are trained by inputting the simulated data set, and the optimal parameters are determined through the loss function of equation (7),
in equation (7), Q is the number of training samples in the simulated training set, X̂^q(Θ) represents the reconstruction result of the q-th sample, X^q_label is the corresponding label image, and Θ = {λ_l, δ_l} represents the parameters to be optimized.
CN202310698121.7A 2023-06-13 2023-06-13 Two-dimensional joint imaging and self-focusing method based on deep learning network Pending CN117148347A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310698121.7A CN117148347A (en) 2023-06-13 2023-06-13 Two-dimensional joint imaging and self-focusing method based on deep learning network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310698121.7A CN117148347A (en) 2023-06-13 2023-06-13 Two-dimensional joint imaging and self-focusing method based on deep learning network

Publications (1)

Publication Number Publication Date
CN117148347A true CN117148347A (en) 2023-12-01

Family

ID=88885642

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310698121.7A Pending CN117148347A (en) 2023-06-13 2023-06-13 Two-dimensional joint imaging and self-focusing method based on deep learning network

Country Status (1)

Country Link
CN (1) CN117148347A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109683161A (en) * 2018-12-20 2019-04-26 南京航空航天大学 A method of the inverse synthetic aperture radar imaging based on depth ADMM network
CN109741256A (en) * 2018-12-13 2019-05-10 西安电子科技大学 Image super-resolution rebuilding method based on rarefaction representation and deep learning
CN110275166A (en) * 2019-07-12 2019-09-24 中国人民解放军国防科技大学 ADMM-based rapid sparse aperture ISAR self-focusing and imaging method
CN112099008A (en) * 2020-09-16 2020-12-18 中国人民解放军国防科技大学 SA-ISAR imaging and self-focusing method based on CV-ADMMN
CN115453531A (en) * 2022-08-23 2022-12-09 中国人民解放军空军预警学院雷达士官学校 Two-dimensional sparse ISAR imaging method based on weighting matrix filling
US20230024401A1 (en) * 2021-06-14 2023-01-26 The Board Of Trustees Of The Leland Stanford Junior University Implicit Neural Representation Learning with Prior Embedding for Sparsely Sampled Image Reconstruction and Other Inverse Problems
CN116243313A (en) * 2023-03-08 2023-06-09 北京理工大学 SAR rapid intelligent sparse self-focusing technology based on distance partition

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109741256A (en) * 2018-12-13 2019-05-10 西安电子科技大学 Image super-resolution rebuilding method based on rarefaction representation and deep learning
CN109683161A (en) * 2018-12-20 2019-04-26 南京航空航天大学 A method of the inverse synthetic aperture radar imaging based on depth ADMM network
CN110275166A (en) * 2019-07-12 2019-09-24 中国人民解放军国防科技大学 ADMM-based rapid sparse aperture ISAR self-focusing and imaging method
CN112099008A (en) * 2020-09-16 2020-12-18 中国人民解放军国防科技大学 SA-ISAR imaging and self-focusing method based on CV-ADMMN
US20230024401A1 (en) * 2021-06-14 2023-01-26 The Board Of Trustees Of The Leland Stanford Junior University Implicit Neural Representation Learning with Prior Embedding for Sparsely Sampled Image Reconstruction and Other Inverse Problems
CN115453531A (en) * 2022-08-23 2022-12-09 中国人民解放军空军预警学院雷达士官学校 Two-dimensional sparse ISAR imaging method based on weighting matrix filling
CN116243313A (en) * 2023-03-08 2023-06-09 北京理工大学 SAR rapid intelligent sparse self-focusing technology based on distance partition

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HAMID REZA HASHEMPOUR: "Sparsity-Driven ISAR Imaging Based on Two-Dimensional ADMM", IEEE SENSORS JOURNAL, 15 November 2020 (2020-11-15) *
MINGJIU LV: "Joint random stepped frequency ISAR imaging and autofocusing based on 2D alternating direction method of multipliers", SIGNAL PROCESSING, 31 December 2022 (2022-12-31), pages 1 - 4 *
LI RUIZE: "Efficient structured sparse ISAR imaging method based on convolutional ADMM network", Systems Engineering and Electronics, 31 January 2023 (2023-01-31) *

Similar Documents

Publication Publication Date Title
CN110070025B (en) Monocular image-based three-dimensional target detection system and method
CN109917361B (en) Three-dimensional unknown scene imaging method based on bistatic radar
CN110208796B (en) Scanning radar super-resolution imaging method based on singular value inverse filtering
CN103543451B (en) A kind of multipath virtual image based on compressed sensing suppresses SAR post-processing approach
CN109884625B (en) Radar correlation imaging method based on convolutional neural network
CN110146881A (en) A kind of scanning radar super-resolution imaging method based on improvement total variation
CN114442092B (en) SAR deep learning three-dimensional imaging method for distributed unmanned aerial vehicle
Smith et al. A vision transformer approach for efficient near-field SAR super-resolution under array perturbation
CN108562898B (en) Distance and direction two-dimensional space-variant self-focusing method of front-side-looking SAR
An et al. LRSR-ADMM-Net: A joint low-rank and sparse recovery network for SAR imaging
CN117148347A (en) Two-dimensional joint imaging and self-focusing method based on deep learning network
El-Ashkar et al. Compressed sensing for SAR image reconstruction
CN113222860A (en) Image recovery method and system based on noise structure multiple regularization
CN114895305B (en) L-based 1 Norm regularized sparse SAR self-focusing imaging method and device
CN113640794B (en) MIMO-SAR three-dimensional imaging self-focusing method
Yu et al. SAR image super-resolution base on weighted dense connected convolutional network
CN115453531A (en) Two-dimensional sparse ISAR imaging method based on weighting matrix filling
CN110780273B (en) Hybrid regularization azimuth super-resolution imaging method
Liang et al. Robust and efficient isar autofocusing based on deep convolution network
Murtada et al. Efficient radar imaging using partially synchronized distributed sensors
Feng et al. Near range radar imaging by SFCW linear sparse array based on block sparsity
Mansour et al. Distributed Radar Autofocus Imaging Using Deep Priors
Li et al. Super-resolution imaging of real-beam scanning radar base on accelerated maximum a posteriori algorithm
Wei et al. Efficient Autofocus for 3-D SAR Sparse Imaging Based on Joint Criterion Optimization
CN114624646B (en) DOA estimation method based on model driven complex neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination