CN114325707A - Sparse aperture micro-motion target ISAR imaging method based on depth expansion network - Google Patents
Abstract
The invention relates to the field of radar imaging and discloses a sparse aperture micro-motion target ISAR imaging method based on a depth expansion network. The method comprises: obtaining radar echo data, and preprocessing and modeling according to the echo data; establishing a model-driven deep learning imaging algorithm using a deep expansion framework; constructing a loss function corresponding to the deep learning imaging algorithm and selecting a training data set; initializing the network parameters, and setting the parameter-update optimization algorithm and the number of training rounds; updating and adjusting the parameters through the optimization algorithm; and, when the set number of training rounds is reached, saving the optimal network parameters to output an optimal algorithm model. The method improves the efficiency of radar imaging of micro-motion targets under sparse aperture conditions.
Description
Technical Field
The application relates to the field of radar imaging, in particular to a sparse aperture micro-motion target ISAR imaging method based on a depth expansion network.
Background
Inverse Synthetic Aperture Radar (ISAR) is a special radar that produces a high-resolution electromagnetic scattering image of a target by exploiting the Doppler information generated by the relative motion between the target and the radar, and is an important technical approach to radar target identification. Compared with optical imaging and other methods, ISAR imaging works in all weather, day and night, and is widely applied to space target monitoring, radar target identification and the like.
Traditional ISAR imaging treats targets as rigid bodies in smooth motion, such as steadily moving airplanes and ships; however, some moving targets with complex structures may carry micro-motion parts. Micro-motion refers to small-amplitude motion of a target other than the motion of its center of mass, for example, the rotation of an airplane propeller or of an antenna on a ship; the structures producing this micro-motion effect are called micro-motion components. In general, a traditional imaging algorithm can image a stably moving target well under full-aperture conditions, but a micro-motion component causes the target to produce a micro-Doppler effect that degrades ISAR image quality. In addition, parts of the radar echo may be lost owing to the radio-wave propagation environment, target switching in a multi-channel radar, and so on, producing a sparse-aperture echo. Under sparse-aperture conditions the fast Fourier transform is of limited use: the formed image suffers strong sidelobe and grating-lobe interference, and traditional algorithms cannot image it well. Therefore, how to image a target with micro-motion components under sparse-aperture conditions is a scientific problem that urgently needs to be solved.
For the problem of the influence of component micro-motion on ISAR imaging, a commonly adopted approach is to separate the micro-motion signal from the body signal and then process the body signal alone, thereby removing the micro-motion effect. Common methods include the Hough transform, the inverse Radon transform, and so on. In recent years, new methods such as wavelet decomposition, empirical mode decomposition, and the chirplet transform have shown good results in separating micro-motion signals. However, the resolution of the images these methods obtain is limited, and their stability is low. For the sparsity problem caused by a low radar data rate, a commonly adopted approach is to recover the image in a sparse manner using a compressed sensing model. Common compressed sensing methods fall into three main classes: greedy algorithms, convex optimization methods, and sparse Bayesian learning. Unfortunately, these methods cannot eliminate the micro-motion signal well; therefore, how to achieve ISAR imaging of a micro-motion target under sparse conditions is currently a major research focus in the field of radar imaging.
At present, although some scholars have applied micro-motion signal separation methods in the field of ISAR imaging, good target imaging is difficult to achieve in real environments because the anti-interference capability and stability of these algorithms are not high. In addition, some scholars have studied ISAR imaging under sparse-aperture conditions intensively, but mainly for rigid-body moving targets without micro-motion components. In recent years, some scholars have begun to study the micro-motion signal under sparse conditions. For example, in 2014 Chen Yijun et al. of Air Force Engineering University proposed a new imaging method based on the orthogonal matching pursuit algorithm, realizing micro-motion target ISAR imaging under sparse-aperture conditions; in 2020 the same team proposed an imaging algorithm based on joint low-rank and sparse constraints (Linearized Alternating Direction Method of Multipliers, L-ADMM), which obtains the micro-motion target ISAR image rapidly and stably under sparse-aperture conditions. However, when this method solves for the body-part image, iterative solution is required, the computational efficiency is relatively low, and tedious manual parameter tuning is needed before use, so its applicability to ISAR imaging in different scenes is limited. To address the low efficiency and manual parameter tuning of convex optimization methods, in 2020 Yang Yi of Xi'an Jiaotong University et al. proposed a deep learning method driven by a compressed sensing model, which recovers the original image rapidly and accurately and has been widely applied in medical imaging. Deep learning is a neural network model comprising a multilayer perceptron that approximates a complex functional relation by designing a sufficient number of network layers.
In theory, deep learning can approximate any function as long as the number of network layers is set sufficiently large. In recent years, many scholars have studied image recovery under the compressed sensing model using deep expansion networks, with good progress. These studies provide a good reference for applying deep learning to the field of ISAR imaging.
Therefore, solving the problem that the micro-motion target image is difficult to obtain under sparse-aperture conditions, so as to improve the efficiency of radar imaging of micro-motion targets under such conditions, has become a technical problem that urgently needs to be solved.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide a sparse aperture micro-motion target ISAR imaging method based on a depth expansion network, and aims to solve the technical problem of improving the imaging efficiency of a radar on a micro-motion target under a sparse aperture condition.
In order to achieve the above object, the present invention provides a sparse aperture micro-motion target ISAR imaging method based on a depth expansion network, the method comprising:
obtaining radar echo data, and preprocessing and modeling according to the echo data;
establishing a model-driven deep learning imaging algorithm by using a deep expansion framework;
constructing a loss function corresponding to the deep learning imaging algorithm and selecting a training data set;
initializing the network parameters, and setting the parameter-update optimization algorithm and the number of training rounds;
updating and adjusting the parameters through the optimization algorithm;
and when the set number of training rounds is reached, saving the optimal network parameters to output the optimal algorithm model.
Optionally, the step of acquiring radar echo data, and performing preprocessing and modeling according to the echo data includes:
acquiring radar echo data, and performing translation compensation on a one-dimensional range profile of the radar echo data, wherein the translation compensation comprises the following steps: envelope alignment and autofocusing;
the translation-compensated one-dimensional range profile is:
H(t̂, t_m) = Σ_{p=1..P} σ_p·sinc(B(t̂ − 2R_p(t_m)/c))·exp(−j4πf_c·R_p(t_m)/c) + Σ_{q=1..Q} σ_q·sinc(B(t̂ − 2R_q(t_m)/c))·exp(−j4πf_c·R_q(t_m)/c)
where H(t̂, t_m) represents the translation-compensated one-dimensional range profile sequence of the target; t̂ and t_m represent fast time and slow time, respectively; m = 1, 2, …, M, where M is the number of pulses contained in the full-aperture radar echo; f_c, B and c represent the center frequency, bandwidth and propagation velocity of the radar signal, respectively; σ_p and R_p(t_m) represent the reflection coefficient of the p-th scattering center of the target body part and its instantaneous rotation distance relative to the radar; σ_q and R_q(t_m) represent the reflection coefficient of the q-th scattering center of the target micro-motion component and its instantaneous rotation distance relative to the radar; p = 1, 2, …, P and q = 1, 2, …, Q, where the target body part contains P scattering centers and the target micro-motion component contains Q scattering centers;
and modeling according to the one-dimensional distance image formula.
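To make the signal model above concrete, the following sketch synthesizes a translation-compensated range-profile sequence in numpy. The sinc envelope and all scatterer and radar parameters are illustrative assumptions, not the patent's exact values:

```python
import numpy as np

def range_profiles(scatterers, t_hat, t_m, fc=10e9, B=500e6, c=3e8):
    """Synthesize a translation-compensated range-profile sequence.

    scatterers: list of (sigma, R) where R(t_m) gives one scattering
    centre's instantaneous rotation distance (metres).
    Returns an M x N complex matrix (slow time x fast time)."""
    H = np.zeros((t_m.size, t_hat.size), dtype=complex)
    for sigma, R in scatterers:
        r = R(t_m)[:, None]                       # M x 1 instantaneous distances
        # sinc envelope at the scatterer's range cell, Doppler phase from fc
        H += sigma * np.sinc(B * (t_hat[None, :] - 2 * r / c)) \
                   * np.exp(-1j * 4 * np.pi * fc * r / c)
    return H

# one body scatterer (small-angle rotation) and one micro-motion scatterer
omega, omega_mm = 0.02, 40.0                      # rad/s: body vs micro-motion
t_m = np.linspace(0, 1, 64)                       # slow time (64 pulses)
t_hat = np.linspace(-1e-7, 1e-7, 128)             # fast-time window
body = (1.0, lambda tm: 5.0 * omega * tm + 3.0)   # x_p*omega*t_m + y_p
mm = (0.5, lambda tm: 0.3 * np.cos(omega_mm * tm))  # r_q*cos(omega'*t_m)
H = range_profiles([body, mm], t_hat, t_m)
```

The micro-motion scatterer's oscillating distance is what produces the micro-Doppler smearing discussed in the background.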
Optionally, the step of modeling according to the one-dimensional range profile formula includes:
under sparse aperture conditions, the one-dimensional range profile formula can be expressed in the following matrix form:
H=L+S
where H, L and S, all K×N complex matrices, represent the one-dimensional range profile sequences of the target, the target body part and the target micro-motion component, respectively; K and N are the number of pulses and the number of range cells of the sparse-aperture one-dimensional range profile sequence. For sparse aperture data, the number of pulses is less than that contained in the full-aperture radar echo, i.e. K < M, and the set of pulse sequence numbers is a subset of the full-aperture pulse sequence numbers, i.e. i ⊂ {1, 2, …, M}, where i represents the sparse-aperture pulse sequence number set;
for a target with micro-motion components, the ISAR image of the target body part and the one-dimensional range profile sequence L of the target body part form a Fourier transform pair, i.e.:
L=PX
where X, an M×N complex matrix, represents the ISAR image of the target body part, and P, a K×M complex matrix, represents a partial Fourier matrix: assuming the complete Fourier matrix is M×M, P is formed by extracting from it the rows whose sequence numbers belong to the sparse-aperture pulse sequence number set i;
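The relation L = PX can be sketched numerically. Below, the complete Fourier matrix is taken as the IDFT matrix (a convention assumption; the forward DFT works equally well for illustration) and P keeps only the rows indexed by the sparse-aperture pulse set i:

```python
import numpy as np

M, N = 64, 64                      # full-aperture pulses / range cells
rng = np.random.default_rng(0)
# sparse-aperture pulse sequence number set i (half the pulses, at random)
i = np.sort(rng.choice(M, size=32, replace=False))

F = np.fft.ifft(np.eye(M), axis=0)  # complete M x M (inverse) Fourier matrix
P = F[i, :]                         # partial Fourier matrix: keep rows in i

X = np.zeros((M, N), dtype=complex)
X[5, 10] = 1.0                      # a single scattering centre in the image
L = P @ X                           # sparse-aperture profiles of the body part
```

One scattering centre in X spreads over all retained pulses of L, which is why recovering X from K < M pulses is an underdetermined problem.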
the following constraints are added: first, the columns of the one-dimensional range profile sequence L of the target body part are strongly correlated, so L has a low-rank property; second, the energy of the one-dimensional range profile sequence S of the target micro-motion component is distributed across different range cells, so S has a sparse property; third, the ISAR image of the target body generally consists of a few scattering centers and is strongly sparse. The model can therefore be written as:
min ||L||_* + λ||S||_1 + μ||X||_1
s.t. H = L + S
L = PX
where ||·||_* and ||·||_1 represent the nuclear norm and the l1 norm of a matrix, used to characterize the rank and the sparsity of the matrix, respectively; λ and μ represent regularization parameters used to adjust the weights of the matrix decomposition and of ISAR imaging, respectively, with λ = μ = 0.5;
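A minimal sketch of evaluating this objective for given matrices, with λ = μ = 0.5 as in the text (the constraints are handled by the solver, not here):

```python
import numpy as np

def objective(L, S, X, lam=0.5, mu=0.5):
    """min ||L||_* + lam*||S||_1 + mu*||X||_1 (constraint terms omitted)."""
    # nuclear norm: sum of singular values of L
    nuclear = np.linalg.svd(L, compute_uv=False).sum()
    return nuclear + lam * np.abs(S).sum() + mu * np.abs(X).sum()

val = objective(np.eye(2), np.zeros((2, 2)), np.zeros((2, 2)))
```

For the 2×2 identity with zero S and X the objective is just the nuclear norm, i.e. 2.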
The above triply-constrained underdetermined problem is then solved with the linearized alternating direction method of multipliers (L-ADMM); at each iteration, after the L, S and X subproblems are solved in turn, the multipliers and penalty factors are updated as:
Y1^(k+1) = Y1^(k) + ρ1^(k)·(H − L^(k+1) − S^(k+1))
Y2^(k+1) = Y2^(k) + ρ2^(k)·(L^(k+1) − PX^(k+1))
ρ1^(k+1) = η·ρ1^(k)
ρ2^(k+1) = η·ρ2^(k)
where ⟨·,·⟩ represents the inner product of two matrices; Y1 and Y2 represent Lagrange multiplier matrices; ρ1 and ρ2 represent penalty factors; ||·||_F represents the F-norm of a matrix; (·)^(k) represents the variable obtained at the k-th iteration; η represents a rising factor controlling the growth of the penalty factors ρ1 and ρ2; P^H represents the conjugate transpose of the partial Fourier matrix P; D_γ(·) represents the singular value shrinkage operator: for an arbitrary matrix A and scalar γ,
D_γ(A) = U·diag(max(σ − γ, 0))·V^H
where A = U·diag(σ)·V^H is the singular value decomposition of A, U and V are unitary matrices, σ is the singular value vector of A, and diag(·) forms a diagonal matrix from a vector; S_γ(·) represents the soft-threshold operator: for arbitrary scalars x and γ, S_γ(x) = sgn(x)·max(|x| − γ, 0), where sgn(·) is the sign operator; applied element-wise to a vector x, the n-th entry is S_γ(x_n), where x_n is the n-th element of x.
When the relative error of the ISAR image of the target body part between two adjacent iterations, ||X^(k+1) − X^(k)|| / ||X^(k)||, falls below a set threshold, the ISAR image X of the body part of the target with micro-motion components under the sparse aperture condition is obtained.
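The overall iteration can be sketched as follows. The multiplier and penalty-factor updates and the relative-error stopping rule follow the text; the specific L, S and X subproblem updates (singular value shrinkage, soft-thresholding, and a linearized gradient step with assumed step sizes) are standard choices and should be read as assumptions, not the patent's exact formulas:

```python
import numpy as np

def svt(A, gamma):
    # singular value shrinkage: D_gamma(A) = U diag(max(sigma - gamma, 0)) V^H
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - gamma, 0.0)) @ Vh

def soft(x, gamma):
    # complex soft-threshold: sgn(x) * max(|x| - gamma, 0), element-wise
    mag = np.abs(x)
    return np.where(mag > gamma, (1.0 - gamma / np.maximum(mag, 1e-12)) * x, 0.0)

def l_admm(H, P, lam=0.5, mu=0.5, eta=1.02, n_iter=50, tol=1e-4):
    """Sketch: recover body image X from sparse profiles H = L + S, L = PX."""
    K, N = H.shape
    M = P.shape[1]
    L = np.zeros_like(H)
    S = np.zeros_like(H)
    X = np.zeros((M, N), dtype=complex)
    Y1 = np.zeros_like(H)
    Y2 = np.zeros_like(H)
    rho1 = rho2 = 1.0
    for _ in range(n_iter):
        X_old = X
        # L-update: weighted average of both constraints, then SVT (assumed form)
        L = svt((rho1 * (H - S + Y1 / rho1) + rho2 * (P @ X - Y2 / rho2))
                / (rho1 + rho2), 1.0 / (rho1 + rho2))
        # S-update: soft-threshold the residual of H = L + S (assumed form)
        S = soft(H - L + Y1 / rho1, lam / rho1)
        # X-update: one linearized gradient step on ||L - PX||, then threshold
        X = soft(X + P.conj().T @ (L + Y2 / rho2 - P @ X), mu / rho2)
        # multiplier and penalty-factor updates, as in the text
        Y1 = Y1 + rho1 * (H - L - S)
        Y2 = Y2 + rho2 * (L - P @ X)
        rho1 *= eta
        rho2 *= eta
        # relative-error stopping rule from the text
        rel = np.linalg.norm(X - X_old) / max(np.linalg.norm(X_old), 1e-12)
        if rel < tol:
            break
    return X
```

With a partial Fourier P and a toy sparse image this runs in a few dozen iterations; the quality of the recovery depends on the assumed update forms and parameter choices.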
Optionally, the step of building a model-driven deep learning imaging algorithm by using a depth expansion framework includes:
unfolding the solution algorithm of the micro-motion target ISAR image under the sparse aperture condition, the L-ADMM algorithm, into a cascade network;
setting the number of iterations of the L-ADMM algorithm to n, and the number of layers of the cascade network to n;
each layer of the network contains 4 unknown parameters, namely λ, μ, η and τ, which the L-ADMM algorithm must set before use;
the parameter initialization of each network layer is kept consistent with the initial values of the L-ADMM algorithm, so that in the training stage L-ADMM-net continuously updates the parameters of each layer according to the loss function, and in the testing stage imaging is performed directly with the learned network parameters.
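The idea of unrolling an iterative solver into a cascade network with per-layer learnable parameters can be sketched as follows. To keep the sketch short, each layer unrolls only a gradient step plus soft-threshold on the sparse image X (an ISTA-style simplification); a full L-ADMM-net layer would also unroll the L and S updates and carry λ and η:

```python
import numpy as np

class UnrolledNet:
    """Deep-unfolding sketch: n layers, each mirroring one solver iteration,
    with its own per-layer scalars (step size tau, threshold weight mu).
    In training these scalars would be updated by backpropagation through
    the layers; here they are plain attributes with solver-style initial
    values, so the forward pass reproduces the untrained network."""

    def __init__(self, n_layers=10, tau=1.0, mu=0.05):
        self.taus = [tau] * n_layers   # one (tau, mu) pair per layer
        self.mus = [mu] * n_layers

    def forward(self, H, P):
        X = P.conj().T @ H                       # back-projection initialization
        for tau, mu in zip(self.taus, self.mus):
            G = P.conj().T @ (P @ X - H)         # gradient of 0.5*||PX - H||_F^2
            Z = X - tau * G                      # gradient step
            mag = np.abs(Z)                      # per-layer soft-threshold
            X = np.where(mag > tau * mu,
                         (1.0 - tau * mu / np.maximum(mag, 1e-12)) * Z, 0.0)
        return X

# usage: forward pass with half the pulses retained
F = np.fft.fft(np.eye(16), axis=0, norm="ortho")
P = F[::2, :]
X_true = np.zeros((16, 8), dtype=complex)
X_true[3, 2] = 1.0
H = P @ X_true
X_rec = UnrolledNet(n_layers=15).forward(H, P)
```

Because the layer count is fixed at n, the trained network has the cost of exactly n solver iterations at test time, with no manual parameter tuning.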
Optionally, the step of constructing a loss function corresponding to the deep learning imaging algorithm and selecting a training data set includes:
constructing a loss function corresponding to the deep learning imaging algorithm;
setting the one-dimensional range profile H of the micro-motion target under the sparse aperture condition as the input of the training set data, the ISAR image X of the body part under the sparse aperture condition as the output, and the ISAR image X of the body part under the full-aperture echo as the label data.
Optionally, the step of constructing a corresponding loss function of the deep learning imaging algorithm includes:
the following loss function is constructed:
Loss = (1/N) · Σ_{i=1..N} ( ||X̂_i − X_i||_F^2 + γ·||vec(X̂_i)||_1 )
where N represents the total number of samples in the training data set; ||·||_F represents the F-norm of a matrix; X̂_i represents the ISAR image of the target body part computed by the neural network from the target one-dimensional range profile H under the sparse aperture condition; X_i represents the body-part ISAR image under the full-aperture echo; γ represents a regularization parameter; ||·||_1 represents the norm of a vector; and vec(·) stretches a matrix into a one-dimensional vector in column order.
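A sketch of this loss, assuming the γ term is the l1 norm of the column-stacked network output and that both terms are averaged over the data set (the exact combination is an assumption, as the extracted text does not show the formula):

```python
import numpy as np

def training_loss(X_hat, X_ref, gamma=0.01):
    """(1/N) * sum_i ( ||X_hat_i - X_ref_i||_F^2 + gamma*||vec(X_hat_i)||_1 ).

    X_hat: list of network-output images; X_ref: list of full-aperture labels."""
    n = len(X_hat)
    total = 0.0
    for A, B in zip(X_hat, X_ref):
        total += np.linalg.norm(A - B, 'fro') ** 2   # data-fidelity term
        total += gamma * np.abs(A).sum()             # l1 on the stacked image
    return total / n

val = training_loss([np.ones((2, 2))], [np.zeros((2, 2))], gamma=1.0)
```

For a single all-ones output against an all-zeros label with γ = 1, the fidelity term is 4 and the l1 term is 4, so the loss is 8.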
Optionally, after the step of setting the one-dimensional range profile H of the micro-motion target under the sparse aperture condition as the input of the training set data, the ISAR image X of the body part under the sparse aperture condition as the output, and the ISAR image X of the body part under the full-aperture echo as the label data, the method further includes:
setting a preset number of network layers, and automatically adjusting the parameters through training so as to obtain a mapping model between the one-dimensional range profile H of the target under the sparse aperture and the body-part ISAR image X.
Optionally, the step of performing initialization setting on the network parameters, and setting the parameter updating optimization algorithm and the number of training rounds includes:
updating and optimizing the parameters with the Adam algorithm, whose principle is as follows:
m_t = β1·m_{t−1} + (1 − β1)·∇_θ ξ
v_t = β2·v_{t−1} + (1 − β2)·(∇_θ ξ)^2
where m_t is the first-order moment estimate of the gradient and β1 its hyperparameter, v_t is the second-order moment estimate of the gradient and β2 its hyperparameter, ξ is the loss function, and θ is the parameter being solved;
after bias correction, one obtains:
m̂_t = m_t / (1 − β1^t)
v̂_t = v_t / (1 − β2^t)
where m̂_t and v̂_t are the bias-corrected first-order and second-order estimates of the gradient;
in addition, the specific update formula of the neural network parameters is:
θ_{t+1} = θ_t − α·m̂_t / (√(v̂_t) + ε)
where t is the number of iterations, θ_{t+1} is the parameter value at time t + 1, θ_t is the parameter value at time t, α is the learning rate of the neural network model, and ε is an error constant.
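The Adam update described above can be sketched as follows; the quadratic toy objective ξ(θ) = θ², whose gradient is 2θ, is purely illustrative:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, alpha=1e-3,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: moment estimates, bias correction, parameter step."""
    m = beta1 * m + (1 - beta1) * grad            # first-order moment m_t
    v = beta2 * v + (1 - beta2) * grad ** 2       # second-order moment v_t
    m_hat = m / (1 - beta1 ** t)                  # bias-corrected estimates
    v_hat = v / (1 - beta2 ** t)
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# minimize xi(theta) = theta^2 starting from theta = 5
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 201):
    theta, m, v = adam_step(theta, 2.0 * theta, m, v, t, alpha=0.1)
```

Because the per-step displacement is roughly bounded by the learning rate α, 200 steps with α = 0.1 are ample to move θ from 5 to near the minimum at 0.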
According to the method, radar echo data are obtained, and preprocessing and modeling are performed according to the echo data; a model-driven deep learning imaging algorithm is established using a deep expansion framework; a loss function corresponding to the deep learning imaging algorithm is constructed and a training data set is selected; the network parameters are initialized, and the parameter-update optimization algorithm and the number of training rounds are set; the parameters are updated and adjusted through the optimization algorithm; and when the set number of training rounds is reached, the optimal network parameters are saved to output the optimal algorithm model, improving the efficiency of radar imaging of micro-motion targets under sparse-aperture conditions.
Drawings
FIG. 1 is a schematic flow chart of a first embodiment of a sparse aperture micro-motion target ISAR imaging method based on a depth expansion network according to the present invention;
FIG. 2 is a network expansion diagram of the L-ADMM-net algorithm of the first embodiment of the sparse aperture micro-motion target ISAR imaging method based on the depth expansion network of the present invention;
FIG. 3 is a diagram of the first embodiment of the sparse aperture micro-motion target ISAR imaging method based on the depth expansion network of the invention, wherein (a) is the airplane used for testing, a Cessna aircraft, and (b) is the result of imaging the target with the R-D algorithm under the full-aperture condition;
FIG. 4 is a target ISAR image obtained by different algorithms under random sparse conditions with sparsity of 50% according to the first embodiment of the sparse aperture micro-motion target ISAR imaging method based on the depth expansion network of the invention, wherein (a) R-D; (b) Chirplet; (c) L-ADMM; (d) L-ADMM-net;
FIG. 5 is a target ISAR image obtained by different algorithms under random sparse conditions with sparsity of 25% according to the first embodiment, wherein (a) R-D; (b) Chirplet; (c) L-ADMM; (d) L-ADMM-net;
FIG. 6 is a target ISAR image obtained by different algorithms under random sparse conditions with sparsity of 12.5% according to the first embodiment, wherein (a) R-D; (b) Chirplet; (c) L-ADMM; (d) L-ADMM-net.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the invention provides a sparse aperture micro-motion target ISAR imaging method based on a depth expansion network, and referring to FIG. 1, FIG. 1 is a schematic flow diagram of a first embodiment of the sparse aperture micro-motion target ISAR imaging method based on the depth expansion network.
Step S10: and acquiring radar echo data, and performing preprocessing and modeling according to the echo data.
In a specific implementation, before Inverse Synthetic Aperture Radar (ISAR) imaging, translation compensation needs to be performed on the one-dimensional range profile of the radar echo data; translation compensation mainly comprises 2 steps, namely envelope alignment and self-focusing. Since translation compensation techniques are now relatively mature, translation compensation is performed on the one-dimensional range profile before the model is constructed, and the translation-compensated one-dimensional range profile is:
H(t̂, t_m) = Σ_{p=1..P} σ_p·sinc(B(t̂ − 2R_p(t_m)/c))·exp(−j4πf_c·R_p(t_m)/c) + Σ_{q=1..Q} σ_q·sinc(B(t̂ − 2R_q(t_m)/c))·exp(−j4πf_c·R_q(t_m)/c)
where H(t̂, t_m) represents the translation-compensated one-dimensional range profile sequence of the target; t̂ and t_m represent fast time and slow time, respectively; m = 1, 2, …, M, where M is the number of pulses contained in the full-aperture radar echo; f_c, B and c represent the center frequency, bandwidth and propagation velocity of the radar signal, respectively; σ_p and R_p(t_m) represent the reflection coefficient of the p-th scattering center of the target body part and its instantaneous rotation distance relative to the radar; σ_q and R_q(t_m) represent the reflection coefficient of the q-th scattering center of the target micro-motion component and its instantaneous rotation distance relative to the radar; p = 1, 2, …, P and q = 1, 2, …, Q, where the target body part contains P scattering centers and the target micro-motion component contains Q scattering centers. For the p-th scattering center of the target body part, the instantaneous rotation distance R_p(t_m) relative to the radar can be expressed as:
R_p(t_m) = x_p·sin(ωt_m) + y_p·cos(ωt_m) ≈ x_p·ωt_m + y_p
where (x_p, y_p) represents the coordinates of the p-th scattering center of the target body part in the target body coordinate system, and ω represents the rotation angular velocity of the target body part. Because the ISAR imaging accumulation time is short, the rotation angle of the target relative to the radar within the accumulation time is small, so sin(ωt_m) ≈ ωt_m and cos(ωt_m) ≈ 1. Suppose the scattering centers of the target micro-motion component rotate around the point O'(x_O', y_O'); then for the q-th scattering center of the target micro-motion component, the instantaneous rotation distance R_q(t_m) relative to the radar can be expressed as:
R_q(t_m) = x_O'·sin(ωt_m) + y_O'·cos(ωt_m) + r_q·cos(ω't_m + θ_q)
≈ x_O'·ωt_m + y_O' + r_q·cos(ω't_m + θ_q)
where (x_O', y_O') represents the coordinates of the rotation center O' in the target body coordinate system, and r_q, ω' and θ_q represent the micro-motion amplitude, rotation angular velocity and initial phase of the q-th scattering center of the target micro-motion component, respectively. Comparing the target body with the micro-motion component, the instantaneous rotation distance R_q(t_m) of the q-th micro-motion scattering center relative to the radar contains the cosine term r_q·cos(ω't_m + θ_q); during ISAR imaging this term produces micro-Doppler (m-D) interference, which degrades ISAR image quality.
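The two instantaneous-distance expressions can be evaluated directly; the numeric scatterer parameters below are illustrative:

```python
import numpy as np

def R_body(t_m, x_p, y_p, omega):
    # small-angle approximation: x_p*sin(wt) + y_p*cos(wt) ~ x_p*w*t + y_p
    return x_p * omega * t_m + y_p

def R_micro(t_m, xO, yO, omega, r_q, omega_mm, theta_q):
    # rotation-centre translation plus the cosine micro-Doppler term
    return xO * omega * t_m + yO + r_q * np.cos(omega_mm * t_m + theta_q)

t_m = np.linspace(0, 0.5, 256)                 # slow time
rb = R_body(t_m, 5.0, 3.0, 0.02)               # body scattering centre
rq = R_micro(t_m, 5.0, 3.0, 0.02, 0.3, 40.0, 0.0)  # micro-motion centre at O'
```

With the same (x, y) and ω, the difference rq − rb is exactly the cosine term, which is the source of the m-D interference.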
Step S20: and establishing a model-driven deep learning imaging algorithm by using a deep expansion framework.
It should be noted that deep learning is a newly emerging direction in machine learning; since it was proposed, it has sparked a revolution in the field of artificial intelligence and is widely applied to image recognition, natural language processing, multimedia vision, and other areas. The essence of deep learning is an extension of the artificial neural network structure; in general, a perceptron comprising multiple hidden layers can be regarded as a deep learning framework. By combining a certain number of hidden layers, deep learning forms a deep network that can map more complex and abstract structural features, and can therefore process more complex information. However, deep learning also encounters some challenges in use. For example, the quality of a deep learning model can often be judged only from its output, while the principle at work inside it cannot be known; for this reason, some scholars call it a "black box". In recent years, some scholars have combined the solution models of scientific problems with deep learning to construct interpretable, model-driven deep neural networks, turning the black box into a mathematical model with practical significance. Following the idea of deep network unfolding, and aiming at the problem of micro-motion target imaging under sparse-aperture conditions, this application unfolds the solution algorithm of the micro-motion target ISAR image under the sparse aperture condition, the L-ADMM algorithm, into a cascade network, constructing a new, interpretable deep learning network, L-ADMM-net. This improves imaging precision, operation efficiency and stability, and provides a new approach for future micro-motion target ISAR imaging under sparse-aperture conditions.
Further, the step of establishing a model-driven deep learning imaging algorithm by using a depth expansion framework includes: expanding the L-ADMM algorithm, namely the solution algorithm for the micro-motion target ISAR image under the sparse aperture condition, into a cascaded network; setting the number of iterations of the L-ADMM algorithm to n and the number of layers of the cascaded network to n; each layer of the network contains 4 unknown parameters, λ, μ, η and τ, which the L-ADMM algorithm must set manually before use; the parameter initialization of each network layer is kept consistent with the initial values of the L-ADMM algorithm, so that in the training stage L-ADMM-net continuously updates the parameters of each layer according to the loss function, and in the testing stage imaging is carried out directly with the learned network parameters.
In a specific implementation, as shown in fig. 2, each layer of the network corresponds to one iteration step of the L-ADMM algorithm, with the variables in each layer updated according to the following formulas:
Y1^(k+1) = Y1^(k) + ρ1^(k)(H - L^(k+1) - S^(k+1))

Y2^(k+1) = Y2^(k) + ρ2^(k)(L^(k+1) - PX^(k+1))

ρ1^(k+1) = η·ρ1^(k)

ρ2^(k+1) = η·ρ2^(k)
These quantities are passed on to the next layer of the network and finally to the output layer. To facilitate comparison of algorithm results, the number of iterations of the L-ADMM algorithm is set to n and the number of layers of the cascaded network is also set to n. Each layer contains 4 unknown parameters: λ, μ, η and τ. The L-ADMM algorithm requires these parameters to be set before use, and similarly the network needs its parameters initialized before use; to facilitate comparison of algorithm results, the parameter initialization of each layer is kept consistent with the initial values of the L-ADMM algorithm. In the training stage, L-ADMM-net continuously updates the parameters of each layer according to the loss function; in the testing stage, the network parameters need no manual adjustment and imaging can be carried out directly. Compared with the L-ADMM algorithm, L-ADMM-net needs no manual parameter tuning each time it is used, giving it stronger adaptability and higher stability for ISAR imaging in different scenes.
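The per-layer multiplier and penalty-factor updates above can be sketched as follows. This NumPy sketch is only an illustration of how the unrolled cascade carries per-layer parameters (λ, μ, η, τ): the primal solves for L, S and X are left as a placeholder callable, the partial Fourier matrix P is omitted, and all class and function names are assumptions, not from the patent:

```python
import numpy as np

class LADMMLayer:
    """One layer of the unrolled cascade; its parameters (lam, mu,
    eta, tau) are learnable in L-ADMM-net, fixed in plain L-ADMM."""
    def __init__(self, lam=0.5, mu=0.5, eta=1.1, tau=0.1):
        self.lam, self.mu, self.eta, self.tau = lam, mu, eta, tau

    def forward(self, H, state, solve_LSX):
        L, S, X, Y1, Y2, rho1, rho2 = state
        # Placeholder primal updates for L, S, X (singular-value and
        # soft-threshold solves in the actual algorithm).
        L, S, X = solve_LSX(H, L, S, X, Y1, Y2, rho1, rho2,
                            self.lam, self.mu, self.tau)
        # Multiplier updates, as in the per-layer formulas above.
        Y1 = Y1 + rho1 * (H - L - S)
        Y2 = Y2 + rho2 * (L - X)   # partial Fourier matrix P omitted here
        # Penalty factors grow by the rising factor eta.
        rho1, rho2 = self.eta * rho1, self.eta * rho2
        return (L, S, X, Y1, Y2, rho1, rho2)

def ladmm_net(H, layers, solve_LSX):
    Z = np.zeros_like(H)
    state = (Z.copy(), Z.copy(), Z.copy(), Z.copy(), Z.copy(), 1.0, 1.0)
    for layer in layers:            # n layers = n unrolled iterations
        state = layer.forward(H, state, solve_LSX)
    return state[2]                 # the main-body ISAR image X
```

In a trainable implementation the per-layer constants would be registered as learnable parameters of a deep learning framework; here they are plain attributes to keep the structure visible.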
Step S30: and constructing a corresponding loss function of the deep learning imaging algorithm and selecting a training data set.
Further, the step of constructing a loss function corresponding to the deep learning imaging algorithm and selecting a training data set includes: constructing the loss function corresponding to the deep learning imaging algorithm; taking the one-dimensional range profile H of the micro-motion target under the sparse aperture condition as the input of the training set data, taking the ISAR image X of the main body part under the sparse aperture condition as the output, and taking the main-body-part ISAR image under the full aperture echo as the label data.
Further, the step of constructing the corresponding loss function of the deep learning imaging algorithm includes: the following loss function was constructed:
Loss = (1/N)·Σ_{i=1}^{N} ( ‖X̂_i - X_i‖_F² + γ‖vec(X̂_i)‖_1 )

where N represents the total number of samples contained in the training data set, ‖·‖_F represents the F-norm of a matrix, X̂_i represents the ISAR image of the target main body part computed by the neural network from the target one-dimensional range profile H under the sparse aperture condition, X_i represents the main-body-part ISAR image under the full aperture echo, γ represents the regularization parameter, ‖·‖_1 represents the l1 norm of a vector, and vec(·) indicates that a matrix is stretched into a one-dimensional vector in column order.
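Assuming, from the variable glossary above, that the loss combines an F-norm error term with an l1 penalty on the column-vectorized predicted image, a minimal sketch reads (function names are illustrative):

```python
import numpy as np

def training_loss(X_pred, X_label, gamma=0.01):
    """Loss for one training pair: squared F-norm error between the
    network output and the full-aperture label, plus gamma times the
    l1 norm of the column-vectorized (vec) predicted image."""
    data_term = np.linalg.norm(X_pred - X_label, ord='fro') ** 2
    sparsity_term = np.sum(np.abs(X_pred.flatten(order='F')))  # vec(.) in column order
    return data_term + gamma * sparsity_term

def dataset_loss(preds, labels, gamma=0.01):
    # Average over the N samples in the training set.
    N = len(preds)
    return sum(training_loss(p, l, gamma) for p, l in zip(preds, labels)) / N
```

The value of gamma here is a placeholder; the patent only names γ as the regularization parameter without stating its value.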
Further, after the step of taking the one-dimensional range profile H of the micro-motion target under the sparse aperture condition as the input of the training set data, the ISAR image X of the main body part under the sparse aperture condition as the output, and the main-body-part ISAR image under the full aperture echo as the label data, the method further includes: setting a preset number of network layers and automatically adjusting the parameters through learning and training to obtain a mapping model between the one-dimensional range profile H of the target under the sparse aperture and the main-body-part ISAR image X.
Step S40: and initializing and setting network parameters, and setting a parameter updating optimization algorithm and the number of training rounds.
Further, the step of initializing the network parameters and setting the parameter-update optimization algorithm and the number of training rounds includes: updating and optimizing the parameters with the Adam algorithm, the principle of which is as follows:

m_t = β1·m_{t-1} + (1 - β1)·∇ξ(θ_t)

v_t = β2·v_{t-1} + (1 - β2)·(∇ξ(θ_t))²

where m_t is the first-order moment estimate of the gradient, β1 is a hyperparameter, v_t is the second-order moment estimate of the gradient, β2 is a hyperparameter, ξ is the loss function, and θ is the parameter to be solved. After bias correction, the following are obtained:

m̂_t = m_t / (1 - β1^t)

v̂_t = v_t / (1 - β2^t)

where m̂_t and v̂_t are the bias-corrected first-order and second-order moment estimates of the gradient. In addition, the specific update formula of the neural network parameters is:

θ_{t+1} = θ_t - α·m̂_t / (√(v̂_t) + ε)

where t is the number of iterations, θ_{t+1} is the parameter value at time t+1, θ_t is the parameter value at time t, α is the learning rate of the neural network model, and ε is an error constant.
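The Adam update described above is standard and can be sketched as a single step function; the default hyperparameter values below are conventional choices, not taken from the patent:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, alpha=1e-3,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: first- and second-order moment estimates of
    the gradient, bias correction, then the parameter update."""
    m = beta1 * m + (1 - beta1) * grad            # m_t
    v = beta2 * v + (1 - beta2) * grad ** 2       # v_t
    m_hat = m / (1 - beta1 ** t)                  # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

For example, iterating this step on the gradient of ξ(θ) = θ² (gradient 2θ) drives θ toward zero.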
Step S50: and updating and adjusting the parameters through an optimization algorithm.
Step S60: and when the training round number is met, saving the optimal network parameters to output an optimal algorithm model.
In one implementation, as shown in fig. 3, the Cessna aircraft includes a propeller micro-motion component in its nose portion, which generates a micro-Doppler (m-D) effect and affects the quality of the ISAR image; due to the presence of the propeller, the image exhibits significant sidelobe interference in the nose portion as well as significant noise.
In a specific implementation, as shown in fig. 4, under the condition of a 50% random sparse mode the R-D algorithm has the worst imaging effect because the coherence of the echo pulses is destroyed; compared with the R-D algorithm, the Chirplet algorithm can remove part of the micro-motion effect, but the background noise caused by the sparse aperture remains very obvious; the L-ADMM algorithm removes most of the background noise produced by the sparse aperture and the micro-motion effect, and its imaging result is greatly improved over the R-D and Chirplet algorithms, although part of the strong interference noise cannot be removed; compared with the L-ADMM algorithm, the L-ADMM-net result has an improved denoising effect and can remove the micro-motion interference noise that L-ADMM cannot eliminate.
In a specific implementation, as shown in fig. 5, under the condition of 25% sparsity the defocus of the R-D, Chirplet and L-ADMM algorithms increases, while the L-ADMM-net method can still acquire high-resolution images, further illustrating that L-ADMM-net achieves better imaging of the micro-motion target under sparse conditions.
In a specific implementation, as shown in fig. 6, under the 12.5% sparse condition the defocus of the R-D, Chirplet and L-ADMM algorithms increases greatly, while L-ADMM-net can still obtain a high-resolution image of the target, further verifying the effectiveness of the algorithm of the present invention.
In the embodiment, radar echo data are obtained, and preprocessing and modeling are performed according to the echo data; establishing a model-driven deep learning imaging algorithm by using a deep expansion frame; constructing a loss function corresponding to the deep learning imaging algorithm and selecting a training data set; carrying out initialization setting on network parameters, and setting a parameter updating optimization algorithm and the number of rounds of training; updating and adjusting the parameters through an optimization algorithm; when the training round number is met, the optimal network parameters are stored to output an optimal algorithm model, and the efficiency of the radar in imaging the micro-motion target under the sparse aperture condition is improved.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., a ROM/RAM, a magnetic disk, an optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (8)
1. A sparse aperture micro-motion target ISAR imaging method based on a depth expansion network is characterized by comprising the following steps:
radar echo data are obtained, and preprocessing and modeling are carried out according to the echo data;
establishing a model-driven deep learning imaging algorithm by using a deep expansion frame;
constructing a loss function corresponding to the deep learning imaging algorithm and selecting a training data set;
carrying out initialization setting on network parameters, and setting a parameter updating optimization algorithm and the number of rounds of training;
updating and adjusting the parameters through an optimization algorithm;
and when the training round number is met, saving the optimal network parameters to output an optimal algorithm model.
2. The method of claim 1, wherein the steps of acquiring radar echo data, preprocessing and modeling from the echo data, comprise:
acquiring radar echo data, and performing translation compensation on a one-dimensional range profile of the radar echo data, wherein the translation compensation comprises the following steps: envelope alignment and autofocusing;
the translation-compensated one-dimensional range profile formula is as follows:
wherein s(t̂, t_m) represents the translation-compensated one-dimensional range profile sequence of the target, t̂ and t_m respectively representing the fast time and the slow time; m = 1, 2, ..., M, wherein M represents the number of pulses contained in the full-aperture radar echo; f_c, B and c respectively represent the center frequency, bandwidth and propagation velocity of the radar signal; σ_p and R_p(t_m) respectively represent the reflection coefficient of the p-th scattering center of the target body part and its instantaneous rotation distance relative to the radar; σ_q and R_q(t_m) respectively represent the reflection coefficient of the q-th scattering center of the target micro-motion component and its instantaneous rotation distance relative to the radar; p = 1, 2, ..., P and q = 1, 2, ..., Q, wherein P indicates that the target main body part contains P scattering centers and Q indicates that the target micro-motion component contains Q scattering centers;
and modeling according to the one-dimensional range profile formula.
3. The method of claim 2, wherein the step of modeling according to the one-dimensional range profile formula comprises:
under sparse aperture conditions, the one-dimensional range profile formula can be expressed in the form of the following matrix:
H=L+S
wherein H, L and S respectively represent the one-dimensional range profile sequences of the target, the target main body part and the target micro-motion component, each being a complex matrix of size K×N, wherein K and N respectively represent the number of pulses and the number of range cells of the sparse-aperture one-dimensional range profile sequence; for sparse aperture data, the number of pulses is less than the number contained in the full-aperture radar echo, namely K < M, and the set of pulse sequence numbers is a subset of the full-aperture pulse sequence numbers, i.e. i ⊂ {1, 2, ..., M}, wherein i represents the sparse-aperture pulse sequence number set;
for the target with a micro-motion component, the ISAR image of the target main body part and the one-dimensional range profile sequence L of the target main body part form a Fourier transform pair, namely:
L=PX
wherein X represents the ISAR image of the target main body part and P represents a partial Fourier matrix; assuming the complete Fourier matrix is F, P is obtained by extracting some of the row vectors of F, the sequence number set of the extracted row vectors being the sparse-aperture pulse sequence number set i;
adding constraint conditions: first, the columns of the one-dimensional range profile sequence L of the target main body part are strongly correlated, so L has a low-rank characteristic; second, the energy of the one-dimensional range profile sequence S of the target micro-motion component is distributed across different range cells, so S has a sparse characteristic; third, the ISAR image X of the target main body generally consists of a few scattering centers and has a strong sparse characteristic; the problem can then be modeled as follows:
min||L||*+λ||S||1+μ||X||1
s.t.H=L+S
L=PX
wherein ‖·‖_* and ‖·‖_1 respectively represent the nuclear norm and the l1 norm of a matrix, used to characterize the rank and the sparsity of the matrix respectively; λ and μ represent regularization parameters used to adjust the weights of the matrix decomposition and the ISAR imaging respectively, and λ = μ = 0.5;
modeling this triple-constraint underdetermined problem finally yields the following augmented Lagrangian model:

min ‖L‖_* + λ‖S‖_1 + μ‖X‖_1 + ⟨Y1, H - L - S⟩ + ⟨Y2, L - PX⟩ + (ρ1/2)‖H - L - S‖_F² + (ρ2/2)‖L - PX‖_F²
and solving by adopting the linearized alternating direction method of multipliers (L-ADMM) to obtain:
Y1^(k+1) = Y1^(k) + ρ1^(k)(H - L^(k+1) - S^(k+1))

Y2^(k+1) = Y2^(k) + ρ2^(k)(L^(k+1) - PX^(k+1))

ρ1^(k+1) = η·ρ1^(k)

ρ2^(k+1) = η·ρ2^(k)
wherein ⟨·,·⟩ represents the inner product of two matrices; Y1 and Y2 represent Lagrange multiplier matrices; ρ1 and ρ2 represent penalty factors; ‖·‖_F represents the F norm of a matrix; (·)^(k) represents the variable obtained at the k-th iteration; η represents a rising factor used to control the growth trend of the penalty factors ρ1 and ρ2; P^H represents the conjugate transpose of the partial Fourier matrix P; D_γ(·) represents the singular value shrinkage operator: specifically, for an arbitrary matrix A and an arbitrary scalar γ,

D_γ(A) = U·diag(max(σ - γ, 0))·V^H

wherein A = U·diag(σ)·V^H represents the singular value decomposition of A, U and V are unitary matrices, σ represents the singular value vector of A, and diag(·) represents the diagonal matrix formed from a vector; S_γ(·) represents the soft-threshold operator: for arbitrary scalars x and γ, S_γ(x) = sgn(x)·max(|x| - γ, 0), wherein sgn(·) represents the sign operator; for an arbitrary vector x, S_γ(·) is applied element by element, wherein x_n represents the n-th element of the vector x.
The iteration is repeated until the relative error ‖X^(k+1) - X^(k)‖ / ‖X^(k)‖ of the ISAR image of the target main body part between two adjacent iterations is smaller than a set threshold, whereupon the ISAR image X of the target main body part with micro-motion components under the sparse aperture condition is obtained.
4. The method of claim 1, wherein the step of building a model-driven deep learning imaging algorithm using a depth expansion framework comprises:
expanding the L-ADMM algorithm, namely the solution algorithm for the micro-motion target ISAR image under the sparse aperture condition, into a cascaded network;

setting the number of iterations of the L-ADMM algorithm to n and the number of layers of the cascaded network to n;

each layer of the network containing 4 unknown parameters, λ, μ, η and τ, which the L-ADMM algorithm must set before use;

keeping the parameter initialization of each network layer consistent with the initial values of the L-ADMM algorithm, so that in the training stage L-ADMM-net continuously updates the parameters of each layer according to the loss function, and in the testing stage imaging is carried out directly with the learned network parameters.
5. The method of claim 1, wherein the step of constructing a corresponding loss function for the deep learning imaging algorithm and selecting a training data set comprises:
constructing a loss function corresponding to the deep learning imaging algorithm;
taking the one-dimensional range profile H of the micro-motion target under the sparse aperture condition as the input of the training set data, taking the ISAR image X of the main body part under the sparse aperture condition as the output, and taking the main-body-part ISAR image under the full aperture echo as the label data.
6. The method of claim 5, wherein the step of constructing the corresponding loss function of the deep learning imaging algorithm comprises:
the following loss function was constructed:
Loss = (1/N)·Σ_{i=1}^{N} ( ‖X̂_i - X_i‖_F² + γ‖vec(X̂_i)‖_1 )

where N represents the total number of samples contained in the training data set, ‖·‖_F represents the F-norm of a matrix, X̂_i represents the ISAR image of the target main body part computed by the neural network from the target one-dimensional range profile H under the sparse aperture condition, X_i represents the main-body-part ISAR image under the full aperture echo, γ represents the regularization parameter, ‖·‖_1 represents the l1 norm of a vector, and vec(·) indicates that a matrix is stretched into a one-dimensional vector in column order.
7. The method of claim 5, wherein after the step of taking the one-dimensional range profile H of the micro-motion target under the sparse aperture condition as the input of the training set data, the ISAR image X of the main body part under the sparse aperture condition as the output, and the main-body-part ISAR image under the full aperture echo as the label data, the method further comprises:
setting a preset number of network layers, and automatically adjusting the parameters through learning and training to obtain a mapping model between the one-dimensional range profile H of the target under the sparse aperture and the main-body-part ISAR image X.
8. The method of claim 1, wherein the steps of initially setting network parameters and setting parameter update optimization algorithms and training rounds comprise:
updating and optimizing the parameters by adopting the Adam algorithm, the principle of which is as follows:

m_t = β1·m_{t-1} + (1 - β1)·∇ξ(θ_t)

v_t = β2·v_{t-1} + (1 - β2)·(∇ξ(θ_t))²

wherein m_t is the first-order moment estimate of the gradient, β1 is a hyperparameter, v_t is the second-order moment estimate of the gradient, β2 is a hyperparameter, ξ is the loss function, and θ is the parameter to be solved;

after bias correction, the following are obtained:

m̂_t = m_t / (1 - β1^t)

v̂_t = v_t / (1 - β2^t)

wherein m̂_t and v̂_t are the bias-corrected first-order and second-order moment estimates of the gradient;

in addition, the specific update formula of the neural network parameters is:

θ_{t+1} = θ_t - α·m̂_t / (√(v̂_t) + ε)

wherein t is the number of iterations, θ_{t+1} is the parameter value at time t+1, θ_t is the parameter value at time t, α is the learning rate of the neural network model, and ε is an error constant.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210013146.4A CN114325707A (en) | 2022-01-06 | 2022-01-06 | Sparse aperture micro-motion target ISAR imaging method based on depth expansion network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210013146.4A CN114325707A (en) | 2022-01-06 | 2022-01-06 | Sparse aperture micro-motion target ISAR imaging method based on depth expansion network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114325707A true CN114325707A (en) | 2022-04-12 |
Family
ID=81025038
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210013146.4A Pending CN114325707A (en) | 2022-01-06 | 2022-01-06 | Sparse aperture micro-motion target ISAR imaging method based on depth expansion network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114325707A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114841901A (en) * | 2022-07-01 | 2022-08-02 | 北京大学深圳研究生院 | Image reconstruction method based on generalized depth expansion network |
CN114841901B (en) * | 2022-07-01 | 2022-10-25 | 北京大学深圳研究生院 | Image reconstruction method based on generalized depth expansion network |
CN115327544A (en) * | 2022-10-13 | 2022-11-11 | 中国人民解放军战略支援部队航天工程大学 | Little-sample space target ISAR defocus compensation method based on self-supervision learning |
CN115327544B (en) * | 2022-10-13 | 2023-01-10 | 中国人民解放军战略支援部队航天工程大学 | Little-sample space target ISAR defocus compensation method based on self-supervision learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Yonel et al. | Deep learning for passive synthetic aperture radar | |
CN111610522B (en) | SA-ISAR imaging method for target with micro-motion component based on low-rank and sparse combined constraint | |
CN114325707A (en) | Sparse aperture micro-motion target ISAR imaging method based on depth expansion network | |
CN110161499B (en) | Improved sparse Bayesian learning ISAR imaging scattering coefficient estimation method | |
CN110726992B (en) | SA-ISAR self-focusing method based on structure sparsity and entropy joint constraint | |
CN109597075B (en) | Imaging method and imaging device based on sparse array | |
CN111580104B (en) | Maneuvering target high-resolution ISAR imaging method based on parameterized dictionary | |
CN113567982B (en) | Directional periodic sampling data sparse SAR imaging method and device based on mixed norm | |
Moses et al. | An autoregressive formulation for SAR backprojection imaging | |
CN113030972B (en) | Maneuvering target ISAR imaging method based on rapid sparse Bayesian learning | |
Hu et al. | Inverse synthetic aperture radar imaging exploiting dictionary learning | |
CN110109098B (en) | Scanning radar rapid super-resolution imaging method | |
CN112147608A (en) | Rapid Gaussian gridding non-uniform FFT through-wall imaging radar BP method | |
CN111948652B (en) | SAR intelligent parameterized super-resolution imaging method based on deep learning | |
CN113466864A (en) | Fast joint inverse-free sparse Bayesian learning super-resolution ISAR imaging algorithm | |
CN112099010B (en) | ISAR (inverse synthetic aperture radar) imaging method for target with micro-motion component based on structured non-convex low-rank representation | |
CN112946644B (en) | Based on minimizing the convolution weight l1Norm sparse aperture ISAR imaging method | |
CN115453523A (en) | Scanning radar sparse target batch processing super-resolution method | |
CN116027293A (en) | Rapid sparse angle super-resolution method for scanning radar | |
CN113640793B (en) | MRF-based real aperture scanning radar super-resolution imaging method | |
CN113030964B (en) | Bistatic ISAR (inverse synthetic aperture radar) thin-aperture high-resolution imaging method based on complex Laplace prior | |
Tuo et al. | Radar Forward-Looking Super-Resolution Imaging Using a Two-Step Regularization Strategy | |
CN115453536A (en) | Forward-looking azimuth pitching two-dimensional super-resolution imaging method for motion platform | |
Han et al. | One-bit radar imaging via adaptive binary iterative hard thresholding | |
Nazari et al. | Sparse recovery using modified sl0 algorithm by weighted projection and application to ISAR imaging |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |