CN112099008B - SA-ISAR imaging and self-focusing method based on CV-ADMMN - Google Patents
SA-ISAR imaging and self-focusing method based on CV-ADMMN
- Publication number: CN112099008B
- Application: CN202010975711.6A
- Authority: CN (China)
- Legal status: Active
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/89—Radar or analogous systems specially adapted for specific applications for mapping or imaging
- G01S13/90—Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
- G01S13/904—SAR modes
- G01S13/9064—Inverse SAR [ISAR]
Abstract
The invention belongs to the field of radar imaging and provides an SA-ISAR imaging and self-focusing method based on CV-ADMMN, comprising the following steps: S1, modeling the one-dimensional range profile sequence of a moving target; S2, modeling the sparse aperture ISAR imaging scene of the moving target; S3, establishing an ADMM reconstruction model of the moving-target sparse aperture ISAR imaging problem; S4, establishing the CV-ADMMN network structure model; S5, solving the sparse aperture ISAR imaging problem with the CV-ADMMN network. The beneficial effects are as follows: the invention realizes sparse aperture ISAR imaging and self-focusing of moving targets, can quickly reconstruct the complete radar image under sparse aperture conditions, and compensates phase errors. The performance of the algorithm depends only weakly on parameter selection, yielding better reconstruction performance. The method has important engineering application value for sparse aperture ISAR imaging and self-focusing under data-loss conditions.
Description
Technical Field
The invention belongs to the field of radar imaging, and particularly relates to a target sparse aperture inverse synthetic aperture radar (SA-ISAR) imaging and self-focusing method based on a complex-domain alternating direction method of multipliers network (CV-ADMMN).
Background
Inverse synthetic aperture radar (ISAR) imaging technology can produce high-resolution images of targets, operates all day and in all weather conditions, and is widely applied in both civil and military fields.
SA-ISAR imaging refers to imaging a target using sparse aperture radar echoes. A sparse aperture echo is an incomplete echo received by the radar. In general, environmental and receiver noise, the 'wide-narrow' alternating mode of a multifunction radar, the random sampling mode of a compressed sensing radar, the target-switching mode of a multi-channel radar, and so on, all result in sparse aperture echoes. Under sparse aperture conditions, the conventional fast Fourier transform (FFT) method cannot achieve azimuth imaging because the correlation between echoes is severely destroyed. In this case, the imaging result can be solved iteratively from a sparsity prior on the radar image using a convex optimization method. However, convex optimization is sensitive to the choice of model parameters, and different choices greatly affect the algorithm's performance. In practical applications, the parameters must be finely tuned by hand, which is inconvenient for engineering use.
Self-focusing compensates the phase error caused by the translational motion of the moving target, thereby achieving fine translational compensation. Under sparse aperture conditions, data loss makes it difficult for traditional self-focusing methods to obtain good results, causing image defocus. Therefore, efficient imaging and self-focusing of moving targets under sparse aperture conditions has important engineering application value.
Disclosure of Invention
The invention aims to solve the technical problems that, under sparse aperture conditions, traditional moving-target ISAR imaging methods are strongly sensitive to parameters, traditional self-focusing methods perform poorly, and engineering application requirements are difficult to meet.
Aiming at the strong sensitivity of imaging algorithms to parameter selection and the poor performance of traditional self-focusing methods under sparse aperture conditions, the invention provides an SA-ISAR imaging and self-focusing method based on CV-ADMMN. The method is based on a deep learning network model: using deep unfolding, it applies the traditional alternating direction method of multipliers (ADMM) to the SA-ISAR problem and models the SA-ISAR problem as a deep learning network. The network is trained on a data set to adaptively adjust the algorithm parameters. To improve the self-focusing effect under sparse aperture conditions, a self-focusing module based on minimum entropy is embedded into the network structure, forming the complete CV-ADMMN structure. Through network forward propagation, this structure can reconstruct the original radar image from a sparse aperture one-dimensional range profile sequence.
The technical scheme adopted by the invention for solving the technical problems is as follows: a SA-ISAR imaging and self-focusing method based on CV-ADMMN comprises the following steps:
s1, modeling the moving target one-dimensional range profile sequence:
Translational compensation is the first step of ISAR imaging, and after decades of development its technical route is relatively mature (Bao Z., Xing M., Wang T. Radar Imaging Technology [M]. Beijing: Publishing House of Electronics Industry, 2005), so the invention assumes that translational compensation of the target has been completed. The radar transmits a linear frequency modulation (LFM) signal, and the two-dimensional echo received from a moving target can be modeled as:
where t̂ and t represent the fast time and full time respectively, and t_m represents the slow time; σ_i and R_i represent the reflection coefficient of the i-th scattering center and its instantaneous rotational distance relative to the radar; f_c, c and γ represent the center frequency of the radar signal, the speed of light in vacuum and the chirp rate, respectively. Because the integration time of ISAR imaging is short, the motion of the target within one pulse duration can be neglected when modeling the echo.
The signal expression obtained after dechirping the two-dimensional signal shown in equation (1) is as follows:
Under sparse aperture conditions, the fast-time echo pulse waveform remains unchanged, so an FFT is performed on the signal shown in equation (2) along the fast time to obtain the target one-dimensional range profile sequence;
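As a minimal numerical illustration of step S1 (not part of the patent; the 400 MHz bandwidth and 25.6 μs pulse width follow the embodiment, while the sampling rate, sample count and scatterer range are hypothetical), the sketch below shows that after dechirping, a point scatterer at range offset R becomes a fast-time sinusoid whose FFT yields a range-profile peak:

```python
import numpy as np

# Illustrative sketch (not from the patent): after dechirping, a point
# scatterer at range offset R from the reference point produces a fast-time
# sinusoid with beat frequency f_b = 2*gamma*R/c, so an FFT along fast time
# yields the one-dimensional range profile.
c = 3e8                         # speed of light (m/s)
gamma = 400e6 / 25.6e-6         # chirp rate = bandwidth / pulse width
fs = 50e6                       # fast-time sampling rate (Hz), assumed
N = 256                         # fast-time samples
t_hat = np.arange(N) / fs       # fast-time axis

R = 30.0                        # scatterer range offset (m), assumed
f_beat = 2 * gamma * R / c      # beat frequency after dechirping
echo = np.exp(1j * 2 * np.pi * f_beat * t_hat)

profile = np.abs(np.fft.fft(echo))      # one-dimensional range profile
peak_bin = int(np.argmax(profile))
expected_bin = int(round(f_beat / fs * N))
print(peak_bin, expected_bin)
```

With these parameters the beat frequency falls exactly on an FFT bin, so the profile peak lands at the predicted range cell.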
s2, modeling the moving target sparse aperture ISAR imaging scene:
under the condition of sparse aperture, the observation of the radar system to the moving target can be represented by the following down-sampling model:
y = Φx + n = DFx + n (3)
where x ∈ C^(MN×1) represents the radar image column vector, obtained by rearranging the image matrix X ∈ C^(M×N) along its columns; M represents the number of azimuth cells of the radar image and N the number of range cells. y ∈ C^(LN×1) represents the received radar one-dimensional range profile vector, obtained by rearranging the one-dimensional range profile matrix Y ∈ C^(L×N) along its columns; L represents the number of one-dimensional range profiles after down-sampling, L << M. Φ represents the down-sampling observation matrix, and n ∈ C^(LN×1) represents a Gaussian white noise vector rearranged along columns. F represents the block Fourier transform matrix, which can be expressed as F = I_N ⊗ F_M, where I_N represents the N×N identity matrix and F_M the M×M Fourier transform matrix. D represents the block down-sampling matrix, which can be expressed as D = I_N ⊗ D_0, where D_0 represents an L×M down-sampling matrix whose elements consist of 0 and 1. Let v represent the vector of sampled range profile indices; then for the element in row l and column m of D_0, D_{l,m} = 1 when the l-th element v_l of v equals m, and D_{l,m} = 0 otherwise, l = 1, 2, …, L, m = 1, 2, …, M.
The down-sampling model given by equation (3) models sparse aperture imaging as the solution of a linear underdetermined inverse problem, which can be solved using the compressed sensing (CS) method (D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, 2006.);
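The down-sampling model of equation (3) can be sanity-checked numerically. The sketch below (with small illustrative sizes, not from the patent) builds the block matrices as Kronecker products and verifies that the vectorized model y = DFx agrees with the matrix form Y = D(FX) used later by the network:

```python
import numpy as np

# Minimal sketch of the down-sampling model y = D F x of equation (3).
# Sizes M, N, L are hypothetical. The block matrices are Kronecker products
# because x = vec(X) stacks the M x N image column by column.
M, N, L = 8, 4, 3               # azimuth cells, range cells, kept pulses
rng = np.random.default_rng(0)

F = np.fft.fft(np.eye(M))       # M x M Fourier transform matrix
v = np.sort(rng.choice(M, size=L, replace=False))   # sampled pulse indices
D = np.zeros((L, M))
D[np.arange(L), v] = 1          # L x M down-sampling matrix of 0s and 1s

F_blk = np.kron(np.eye(N), F)   # block Fourier transform (MN x MN)
D_blk = np.kron(np.eye(N), D)   # block down-sampling (LN x MN)

X = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
x = X.flatten(order="F")        # rearrange along columns: vec(X)
y = D_blk @ (F_blk @ x)         # noiseless observation, vector form

Y = D @ (F @ X)                 # equivalent matrix form: Y = D (F X)
print(np.allclose(y, Y.flatten(order="F")))
```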
s3, establishing an ADMM reconstruction model of the moving target sparse aperture ISAR imaging problem:
for the down-sampling model given by equation (3), it is solved using conventional ADMM:
S3.1, constructing the optimization model as follows:
min_{x,z} (1/2)||y − Φx||_2^2 + λ||z||_1, s.t. x = z (4)
where z is an introduced intermediate variable and λ represents the regularization parameter.
S3.2, for the optimization model of equation (4), the augmented Lagrangian function is obtained:
L_ρ(x, z, α) = (1/2)||y − Φx||_2^2 + λ||z||_1 + Re{α^H(x − z)} + (ρ/2)||x − z||_2^2 (5)
in the formula (5), α represents a lagrangian multiplier, ρ represents a penalty factor, | · | | luminance2Represents a vector l2Norm, | · | luminance1Representing a vector or matrix of l1And (4) norm.
S3.3, using equation (5), the optimization problem of equation (4) is converted into the following subproblems for iterative solution:
x^(k) = argmin_x L_ρ(x, z^(k−1), α^(k−1))
z^(k) = argmin_z L_ρ(x^(k), z, α^(k−1))
α^(k) = α^(k−1) + ρ(x^(k) − z^(k)) (6)
where k represents the number of iterations; by substituting equation (5) into equation (6), the analytical solutions of x^(k) and z^(k) can be obtained, and the complete iteration steps are finally obtained as follows:
x^(k) = (Φ^H Φ + ρI)^(−1)(Φ^H y + ρz^(k−1) − α^(k−1))
z^(k) = S_{λ/ρ}(x^(k) + α^(k−1)/ρ)
α^(k) = α^(k−1) + ρ(x^(k) − z^(k)) (7)
where S_t(·) represents the soft-threshold operator: for any complex scalar x and real threshold t, S_t(x) = max(|x| − t, 0) · x/|x| (with S_t(0) = 0); for any complex vector x and real threshold t, S_t(x) is applied element by element, where x_i represents the i-th element of the complex vector x;
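The complex soft-threshold operator used in the z-update of equation (7) can be sketched as follows (a standard implementation, not patent-specific code):

```python
import numpy as np

# Complex soft-threshold: shrink each element's magnitude toward zero by the
# threshold t while preserving its phase.
def soft_threshold(x: np.ndarray, t: float) -> np.ndarray:
    mag = np.abs(x)
    # elements with |x| <= t become exactly 0, so guard the division
    scale = np.maximum(mag - t, 0.0) / np.where(mag > 0, mag, 1.0)
    return scale * x

z = soft_threshold(np.array([3 + 4j, 0.1 + 0.1j, -2.0]), t=1.0)
print(z)   # magnitudes 5, ~0.14, 2 -> shrunk to 4, 0, 1 with phases kept
```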
s4, establishing a CV-ADMMN network structure model:
each iteration of equation (7) includes x(k)、z(k)、α(k)Three calculation steps, which correspond to three different network layers: x is the number of(k)Referred to as the k-th reconstruction layer, z(k)Called the kth noise reduction layer, α(k)Is called asThe kth Lagrangian multiplier updates the layer. X is to be(k)、z(k)、α(k)And connecting the structures in sequence to obtain a kth-level structure, and repeatedly cascading the structures to obtain the CV-ADMMN model.
In order to reduce the operation amount in practical application, the vector expression in the expression (7) is rearranged into a matrix form, and the following CV-ADMMN forward propagation expression can be obtained:
where the four quantities respectively represent the penalty factor of the k-th denoising layer, the regularization parameter of the k-th denoising layer, the penalty factor of the k-th Lagrange multiplier update layer, and the penalty factor of the k-th reconstruction layer; all of the above are independently adjustable parameters. Mask represents an M×N sparse sampling mask matrix whose elements consist of 0 and 1: the positions where the original signal is sampled and retained are 1, and 0 otherwise. 1_{M×N} represents an all-ones matrix of size M×N. Z^(k) and A^(k) respectively represent the matrices obtained by rearranging the intermediate variable z^(k) and the Lagrange multiplier α^(k). F represents the Fourier transform matrix, and Y ∈ C^(L×N) represents the down-sampled one-dimensional range profile matrix, each row of which is a one-dimensional range profile; Y is the network input.
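A minimal sketch of the forward propagation in matrix form (the structure underlying equation (8)) is given below. The function name, the fixed per-iteration parameters rho and lam, and the scene sizes are illustrative assumptions; in CV-ADMMN these parameters are learned independently per layer:

```python
import numpy as np

def cv_admm_forward(Y, v, M, K=150, rho=1.0, lam=0.01):
    """Unrolled ADMM forward pass in matrix form (sketch).

    Y : (L, N) down-sampled range profile matrix (rows = kept pulses)
    v : indices of the kept pulses among the M azimuth positions
    """
    L, N = Y.shape
    mask = np.zeros((M, N))
    mask[v, :] = 1.0                     # sparse sampling Mask matrix
    Yz = np.zeros((M, N), dtype=complex)
    Yz[v, :] = Y                         # zero-filled observations
    Z = np.zeros((M, N), dtype=complex)
    A = np.zeros((M, N), dtype=complex)
    X = Z
    for _ in range(K):
        # reconstruction layer: closed-form x-update in the Fourier domain,
        # element-wise division by (Mask + rho * all-ones)
        rhs = Yz + np.fft.fft(rho * Z - A, axis=0, norm="ortho")
        X = np.fft.ifft(rhs / (mask + rho), axis=0, norm="ortho")
        # denoising layer: complex soft threshold with threshold lam/rho
        V = X + A / rho
        mag = np.abs(V)
        Z = np.maximum(mag - lam / rho, 0.0) / np.where(mag > 0, mag, 1.0) * V
        # Lagrange multiplier update layer
        A = A + rho * (X - Z)
    return X

# usage: recover a 3-scatterer image from half of the pulses
rng = np.random.default_rng(1)
M, N = 64, 4
X_true = np.zeros((M, N), dtype=complex)
X_true[[5, 20, 40], 0] = [1.0, 1.0 + 1j, -1.0]
v = np.sort(rng.choice(M, size=32, replace=False))
Y = np.fft.fft(X_true, axis=0, norm="ortho")[v, :]
X_hat = cv_admm_forward(Y, v, M)
print(sorted(np.argsort(np.abs(X_hat[:, 0]))[-3:].tolist()))
```

With fixed shared parameters this is plain iterative ADMM; the network version replaces rho and lam by trained per-layer values.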
S5, solving the sparse aperture ISAR imaging problem by using CV-ADMMN:
s5.1 training CV-ADMMN:
S5.1.1, constructing a data set similar to the actual application scenario. The data set comprises multiple range profile-label data pairs {(Y_q, X_q)}, q = 1, 2, …, Q, where Y_q represents the q-th sparse aperture one-dimensional range profile matrix and X_q represents the q-th image label. The data in the data set are input in sequence into the CV-ADMMN model generated in S4 to train the CV-ADMMN.
S5.1.2 define two loss functions as follows:
where X̂_q represents the reconstructed image output by the network for input Y_q, ξ represents a penalty coefficient, ||·||_F represents the Frobenius norm of a matrix, Q represents the total number of data pairs in the data set, and abs(·) represents the element-wise modulus of a matrix or vector. The L1 loss function is the root mean square error (RMSE) between the label image and the reconstructed image; the L2 loss function is this RMSE plus an ℓ1-norm regularization term. The loss function L2 often obtains better results under low signal-to-noise ratio conditions, while the loss function L1 is often suitable for high signal-to-noise ratio conditions.
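The two losses can be sketched as follows. Since the patent's equations (9)-(10) are not reproduced above, the code follows only the textual description (L1 as the RMSE between the moduli of label and reconstruction, L2 adding an ℓ1 term weighted by the penalty coefficient ξ); the exact weighting is an assumption:

```python
import numpy as np

def loss_l1(X_hat, X_label):
    # RMSE between the moduli of reconstruction and label
    return float(np.sqrt(np.mean((np.abs(X_hat) - np.abs(X_label)) ** 2)))

def loss_l2(X_hat, X_label, xi=1e-3):
    # RMSE plus an l1-norm regularization term weighted by xi
    # (assumption: the regularizer acts on the reconstructed image)
    return loss_l1(X_hat, X_label) + xi * float(np.sum(np.abs(X_hat)))

X_label = np.array([[1 + 0j, 0], [0, 1j]])
X_hat = np.array([[0.9 + 0j, 0], [0, 0.9j]])
print(loss_l1(X_hat, X_label), loss_l2(X_hat, X_label))
```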
S5.1.3, the network parameters are updated using complex-domain back propagation (BP) and the gradient descent algorithm (G. M. Georgiou and C. Koutsougeras, "Complex domain backpropagation," IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 39, no. 5, pp. 330-334, May 1992). The complex derivative employed follows the definition:
∂o/∂O = ∂o/∂Re{O} + j·∂o/∂Im{O} (11)
where o represents a real scalar and O represents a complex matrix or vector; Re{·} and Im{·} represent the real and imaginary parts of a complex quantity, respectively.
The back propagation process requires solving the partial derivatives of the loss function with respect to each network layer and its parameters. For convenience of presentation, the formula derivation in this part uses the vector form before rearrangement into matrices; in practical applications, CV-ADMMN is still implemented in matrix form.
S5.1.3.1, define the vector form of the network output, obtained by rearranging the output matrix along its columns, and compute the derivative of the loss function with respect to it:
where the label image vector is obtained by rearranging the label image matrix along its columns; the symbol ⊙ represents element-wise multiplication of matrices or vectors, and ⊘ represents element-wise division;
S5.1.3.2, the partial derivative of the loss function with respect to each CV-ADMMN layer can be expressed through the derivatives of the network layers behind it. After the partial derivative at the output layer is obtained, the partial derivative of each layer can be solved successively through the chain rule, whose specific expression is as follows:
where L denotes the loss function L1 or L2.
S5.1.3.3, after using equation (12) to obtain the partial derivatives of the loss function with respect to the (k+1)-th reconstruction layer x^(k+1), the (k+1)-th noise reduction layer z^(k+1) and the (k+1)-th Lagrange multiplier update layer α^(k+1), the gradients of the parameters to be solved in each layer can be further calculated; the specific expressions in matrix form are as follows:
where sum (-) denotes summing all elements of the matrix.
S5.1.3.4, the parameters are updated by the gradient descent method during training. For the network parameters in the k-th stage structure, the update expression is as follows:
where the updated quantities represent the parameters of the current stage structure after updating, and η represents the learning rate of the parameter update.
S5.1.3.5, when the gradients of the parameter updates approach 0, training is stopped and a CV-ADMMN model with fixed parameters is obtained.
S5.2 embedding a self-focusing module based on minimum entropy:
s5.2.1, constructing a sparse aperture observation scene containing a phase error:
y = EDFx + n (15)
where E = diag[exp(jφ_1), exp(jφ_2), …, exp(jφ_L)] represents the phase error in the one-dimensional range profiles, and φ_l represents the phase error of the l-th one-dimensional range profile.
S5.2.2, in the above model, E is the unknown phase error matrix; to realize the self-focusing function, the invention estimates E by the minimum entropy method. For any reconstruction layer output X^(k) in equation (8), the estimate of the phase error matrix E is given by:
where φ = [φ_1, φ_2, …, φ_L], and e(X^(k)(φ)) represents the entropy of the matrix X^(k)(φ), whose expression is as follows:
e(X^(k)) = −Σ_{i=1}^{M} Σ_{j=1}^{N} (|X^(k)_{i,j}|²/S) · ln(|X^(k)_{i,j}|²/S) (17)
where X^(k)_{i,j} represents the element in row i and column j of the matrix X^(k), and S = Σ_{i=1}^{M} Σ_{j=1}^{N} |X^(k)_{i,j}|² represents the total energy of the matrix. The phase error value that minimizes the entropy is obtained by solving the following equation:
where l = 1, 2, …, L; the value of the vector φ can finally be solved.
Substituting the analytical expression of X^(k) in equation (8) into equation (18) yields the analytical expression of φ_l:
where Y_{·l} represents the l-th column of the matrix Y, and 0 represents an all-zero matrix. The self-focusing function can be realized using the estimated phase error.
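The minimum-entropy criterion of equations (16)-(18) rests on the fact that phase errors defocus the image and raise its entropy. The sketch below implements the image entropy of equation (17) and checks this behavior on an illustrative point-target image (sizes and data are assumptions, not from the patent):

```python
import numpy as np

# Image entropy of equation (17): e = -sum(p * ln p) with p = |X|^2 / S,
# where S is the total image energy.
def image_entropy(X):
    p = np.abs(X) ** 2
    S = p.sum()                  # total image energy
    p = p[p > 0] / S             # drop exact zeros: lim x*ln(x) -> 0
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(2)
M, N = 64, 64
X = np.zeros((M, N), dtype=complex)
X[[10, 30, 50], [5, 32, 60]] = 1.0           # well-focused 3-point image
Y = np.fft.fft(X, axis=0)                    # azimuth spectrum (pulses)
phi = rng.uniform(-np.pi, np.pi, size=(M, 1))
X_defocused = np.fft.ifft(Y * np.exp(1j * phi), axis=0)  # add phase error
print(image_entropy(X) < image_entropy(X_defocused))
```

A random per-pulse phase error spreads each scatterer's energy across the azimuth cells, so the defocused image has strictly higher entropy; minimizing entropy over φ therefore refocuses the image.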
S5.2.3, using the analytical expression of equation (19), the self-focusing module is embedded into the CV-ADMMN structure to obtain the CV-ADMMN forward propagation expression with self-focusing function:
Using equation (20), a CV-ADMMN model with a self-focusing function can be constructed. The unknown parameters in equation (20) are obtained through the training step of S5.1. Compared with the model of equation (8), the model constructed by equation (20) can adaptively compensate the initial phases in the radar signal and has wider application scenarios.
S5.3 sparse aperture imaging and self-focusing by using CV-ADMMN embedded into self-focusing module
S5.3.1, acquire the actually observed sparse aperture echo and obtain the sparse aperture one-dimensional range profile sequence through fast-time FFT. Coarse translational compensation is performed on the one-dimensional range profile sequence using the cross-correlation method (Bao Z., Xing M., Wang T. Radar Imaging Technology [M]. Beijing: Publishing House of Electronics Industry, 2005).
S5.3.2, inputting the roughly compensated one-dimensional range image sequence Y into CV-ADMMN, and carrying out forward propagation through the network to obtain a high-quality ISAR image X.
The invention has the following beneficial effects: it realizes sparse aperture ISAR imaging and self-focusing of moving targets, can quickly reconstruct the complete radar image under sparse aperture conditions, and compensates phase errors. The performance of the algorithm depends only weakly on parameter selection, yielding better reconstruction performance. The method has important engineering application value for sparse aperture ISAR imaging and self-focusing under data-loss conditions.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 is a schematic diagram of CV-ADMMN construction;
FIG. 3 is a diagram of a CV-ADMMN architecture embedded in a self-focusing module;
FIG. 4 full aperture condition: (a) target one-dimensional range profile sequence; (b) target ISAR image;
FIG. 5 sparse aperture condition with 25% sparsity: (a) target one-dimensional range profile sequence; (b) target ISAR image obtained by the range-Doppler method; (c) ISAR image obtained by the invention trained with the L1 loss function; (d) ISAR image obtained by the invention trained with the L2 loss function;
FIG. 6 raw data containing phase error, from which 64 pulses are randomly extracted to simulate sparse aperture data with 25% sparsity: (a) target one-dimensional range profile; (b) imaging result of the conventional RD method; (c) imaging result of CV-ADMMN with embedded self-focusing module trained with the L1 loss function; (d) imaging result of CV-ADMMN with embedded self-focusing module trained with the L2 loss function.
Detailed Description
The invention is further illustrated with reference to the accompanying drawings:
FIG. 1 is a flow chart of the present invention.
Fig. 2 and 3 show the CV-ADMMN structure and the CV-ADMMN structure embedded in the self-focusing module, respectively. The invention provides a complex domain ADMM-Net based target SA-ISAR imaging and self-focusing method, which comprises the following steps:
s1, modeling the moving target one-dimensional range profile sequence;
s2 modeling the moving target sparse aperture ISAR imaging scene;
s3, establishing an ADMM reconstruction model of the moving target sparse aperture ISAR imaging problem;
s4, establishing a CV-ADMMN network structure model;
s5, solving the sparse aperture ISAR imaging problem by using a CV-ADMMN network structure;
Fig. 4(a) and 4(b) show the target one-dimensional range profile sequence and the ISAR image of a radar-measured target under the full aperture condition. The radar transmit signal parameters are: center frequency 5.52 GHz, bandwidth 400 MHz, pulse width 25.6 μs. The full aperture data contain 256 pulses, each containing 256 sampling points.
64 pulses are randomly extracted from the full aperture data without phase error to simulate sparse aperture data with 25% sparsity. The resulting target one-dimensional range profile is shown in fig. 5(a). The sparse aperture data are then imaged using both the conventional range-Doppler (RD) method and the present invention; the ISAR images obtained are shown in fig. 5(b), (c) and (d), respectively. The invention is trained with two different loss functions, yielding two different network structures: figs. 5(c) and (d) correspond to the imaging results after training with the L1 and L2 loss functions, respectively. As seen from fig. 5(b), the sparse aperture seriously destroys the correlation between pulses, and the RD algorithm can hardly obtain a well-focused image. As seen from figs. 5(c) and (d), the ISAR images obtained by the invention are well focused.
Further, raw data containing phase errors are considered, and 64 pulses are randomly extracted from them to simulate sparse aperture data with 25% sparsity. The resulting target one-dimensional range profile is shown in fig. 6(a). The conventional RD method without self-focusing is used for comparison with the invention; its imaging result is shown in fig. 6(b). In this case, not only is the correlation between pulses destroyed, but a phase error is also superimposed on the echo data; the conventional RD method cannot adaptively compensate this phase error, so imaging cannot be completed. Figs. 6(c) and (d) show the imaging results of the CV-ADMMN with embedded self-focusing module trained with the L1 and L2 loss functions, respectively. As shown in figs. 6(c) and (d), the invention overcomes the destroyed correlation between echo pulses and correctly compensates the phase error.
In conclusion, the invention can effectively realize imaging and self-focusing of moving targets under sparse aperture conditions, performs well on sparse aperture data with 25% sparsity, and has high engineering application value.
Claims (1)
1. A SA-ISAR imaging and self-focusing method based on CV-ADMMN is characterized by comprising the following steps:
s1, modeling the moving target one-dimensional range profile sequence:
the radar transmits a chirp signal, and the two-dimensional echo received for a moving target can be modeled as:
wherein t̂ and t represent the fast time and full time respectively, and t_m represents the slow time; σ_i and R_i represent the reflection coefficient of the i-th scattering center and its instantaneous rotational distance relative to the radar; f_c, c and γ represent the center frequency of the radar signal, the speed of light in vacuum and the chirp rate, respectively; because the integration time of ISAR imaging is short, the motion of the target within one pulse duration can be neglected when modeling the echo;
the signal expression obtained after dechirping the two-dimensional signal shown in equation (1) is as follows:
under sparse aperture conditions, the fast-time echo pulse waveform remains unchanged, so an FFT is performed on the signal shown in equation (2) along the fast time to obtain the target one-dimensional range profile sequence;
s2, modeling the moving target sparse aperture ISAR imaging scene:
under the condition of sparse aperture, the observation of the radar system to the moving target can be represented by the following down-sampling model:
y = Φx + n = DFx + n (3)
wherein x ∈ C^(MN×1) represents the radar image column vector, obtained by rearranging the image matrix X ∈ C^(M×N) along its columns; M represents the number of azimuth cells of the radar image and N the number of range cells; y ∈ C^(LN×1) represents the received radar one-dimensional range profile vector, obtained by rearranging the one-dimensional range profile matrix Y ∈ C^(L×N) along its columns; L represents the number of one-dimensional range profiles after down-sampling, L << M; Φ represents the down-sampling observation matrix, and n ∈ C^(LN×1) represents a Gaussian white noise vector rearranged along columns; F represents the block Fourier transform matrix, which can be expressed as F = I_N ⊗ F_M, wherein I_N represents the N×N identity matrix and F_M the M×M Fourier transform matrix; D represents the block down-sampling matrix, which can be expressed as D = I_N ⊗ D_0, wherein D_0 represents an L×M down-sampling matrix whose elements consist of 0 and 1; let v represent the vector of sampled range profile indices; then for the element in row l and column m of D_0, D_{l,m} = 1 when the l-th element v_l of v equals m, and D_{l,m} = 0 otherwise, l = 1, 2, …, L, m = 1, 2, …, M;
the down-sampling model given by equation (3) models sparse aperture imaging as the solution of a linear underdetermined inverse problem, which can be solved by the compressed sensing method;
s3, establishing an ADMM reconstruction model of the moving target sparse aperture ISAR imaging problem:
for the down-sampling model given by equation (3), it is solved using conventional ADMM:
s3.1, constructing an optimization model as follows:
wherein z is an introduced intermediate variable, and λ represents a regularization parameter;
s3.2, aiming at the optimization model of the formula (4), obtaining an augmented Lagrange function:
in the formula (5), α represents a lagrangian multiplier, ρ represents a penalty factor, | · | | luminance2Represents a vector l2Norm, | · | luminance1Representing a vector or matrix of l1A norm;
s3.3, converting the optimization problem of the formula (4) into the following subproblems by using the formula (5) to carry out iterative solution:
wherein k represents the number of iterations; by substituting equation (5) into equation (6), the analytical solutions of x^(k) and z^(k) can be obtained, and the complete iteration steps are finally obtained as follows:
wherein S_t(·) represents the soft-threshold operator: for any complex scalar x and real threshold t, S_t(x) = max(|x| − t, 0) · x/|x| (with S_t(0) = 0); for any complex vector x and real threshold t, S_t(x) is applied element by element, wherein x_i represents the i-th element of the complex vector x;
s4, establishing a CV-ADMMN network structure model:
each iteration of equation (7) includes three calculation steps, x^(k), z^(k) and α^(k), which correspond to three different network layers: x^(k) is called the k-th reconstruction layer, z^(k) the k-th noise reduction layer, and α^(k) the k-th Lagrangian multiplier update layer; connecting x^(k), z^(k), α^(k) in sequence gives the k-th stage structure, and repeatedly cascading these structures yields the CV-ADMMN model;
in order to reduce the operation amount in practical application, the vector expression in the expression (7) is rearranged into a matrix form, and the following CV-ADMMN forward propagation expression can be obtained:
wherein the four quantities respectively represent the penalty factor of the k-th denoising layer, the regularization parameter of the k-th denoising layer, the penalty factor of the k-th Lagrange multiplier update layer, and the penalty factor of the k-th reconstruction layer, all of which are independently adjustable parameters; Mask represents an M×N sparse sampling mask matrix whose elements consist of 0 and 1: the positions where the original signal is sampled and retained are 1, and 0 otherwise; 1_{M×N} represents an all-ones matrix of size M×N; Z^(k) and A^(k) respectively represent the matrices obtained by rearranging the intermediate variable z^(k) and the Lagrange multiplier α^(k); F represents the Fourier transform matrix, and Y ∈ C^(L×N) represents the down-sampled one-dimensional range profile matrix, each row of which is a one-dimensional range profile; Y is the network input;
s5, solving the sparse aperture ISAR imaging problem by using CV-ADMMN:
s5.1 training CV-ADMMN:
S5.1.1, constructing a data set similar to the actual application scenario: the data set comprises multiple range profile-label data pairs {(Y_q, X_q)}, q = 1, 2, …, Q, wherein Y_q represents the q-th sparse aperture one-dimensional range profile matrix and X_q represents the q-th image label; the data in the data set are input in sequence into the CV-ADMMN model generated in S4 to train the CV-ADMMN;
s5.1.2 define two loss functions as follows:
wherein X̂_q represents the reconstructed image output by the network for input Y_q, ξ represents a penalty coefficient, ||·||_F represents the Frobenius norm of a matrix, Q represents the total number of data pairs in the data set, and abs(·) represents the element-wise modulus of a matrix or vector; the L1 loss function is the root mean square error between the label image and the reconstructed image; the L2 loss function is this RMSE plus an ℓ1-norm regularization term; the loss function L2 often obtains better results under low signal-to-noise ratio conditions, while the loss function L1 is often suitable for high signal-to-noise ratio conditions;
S5.1.3, update the network parameters using complex-domain back-propagation and a gradient-descent algorithm; the complex derivatives used follow the definition below:
where o denotes a real scalar, O denotes a complex matrix or vector, and Re{·} and Im{·} denote the real and imaginary parts of a complex quantity, respectively;
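Under the definition above, the gradient of a real loss o with respect to a complex matrix O is formed from the partial derivatives with respect to Re{O} and Im{O}. A numerical sketch (not the patent's code) illustrates this: for o = Σ|Oij|², the convention yields the gradient 2O.

```python
import numpy as np

def complex_grad(f, O, eps=1e-6):
    # Numerical complex gradient following the definition above:
    # dL/dO = dL/dRe{O} + j * dL/dIm{O}, via central differences per element.
    g = np.zeros_like(O, dtype=complex)
    for idx in np.ndindex(O.shape):
        dr = np.zeros_like(O)
        dr[idx] = eps
        g[idx] = (f(O + dr) - f(O - dr)) / (2 * eps) \
               + 1j * (f(O + 1j * dr) - f(O - 1j * dr)) / (2 * eps)
    return g

O = np.array([[1.0 + 2.0j, -0.5j], [0.3, 1.0 - 1.0j]])
loss = lambda O: np.sum(np.abs(O) ** 2)   # real-valued loss of a complex matrix
g = complex_grad(loss, O)
```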
During back-propagation, the partial derivatives of the loss function with respect to each network layer and its parameters must be solved. For convenience of presentation, the derivations in this part use the vector form (not rearranged into a matrix); in practical application, CV-ADMMN is still implemented in matrix form:
S5.1.3.1, define the vector form of the network output as the vector obtained by rearranging its columns, and compute the derivative of the loss function with respect to this vector:
where the label image vector appears as defined, the symbol ⊙ denotes element-wise multiplication of matrices or vectors, and the companion symbol denotes element-wise division;
S5.1.3.2, in CV-ADMMN the partial derivative of the loss function with respect to each layer can be expressed through the derivatives of the network layers behind it; once the partial derivative at the output layer is obtained, the partial derivative of every layer can be solved by the chain rule, whose specific expression is:
where L denotes the loss function L1 or L2;
S5.1.3.3, after using equation (12) to obtain the partial derivatives of the loss function with respect to the (k+1)-th reconstruction layer x(k+1), the (k+1)-th denoising layer z(k+1), and the (k+1)-th Lagrange-multiplier update layer α(k+1), the gradient of the parameters to be solved in each layer can be computed; written in matrix form, the specific expressions are:
where sum(·) denotes the sum over all elements of a matrix;
S5.1.3.4, during training the parameters are updated by gradient descent; for the network parameters in the k-th stage structure, the update expressions are:
where the left-hand quantities denote the updated parameters of the current stage, and η denotes the learning rate of the parameter update;
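The per-stage update of S5.1.3.4 is plain gradient descent with learning rate η. A minimal sketch, with illustrative parameter names that are not the patent's notation:

```python
import numpy as np

def gd_step(params, grads, eta=0.01):
    # One gradient-descent update for a stage's learnable parameters
    # (penalty factors, regularization parameter): theta <- theta - eta * grad.
    return {name: params[name] - eta * grads[name] for name in params}

params = {"rho": 1.0, "lam": 0.5}       # hypothetical stage-k parameters
grads = {"rho": 0.2, "lam": -0.1}       # gradients from back-propagation
new_params = gd_step(params, grads, eta=0.1)
```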
S5.1.3.5, when the parameter gradients decrease to 0, training is stopped, yielding a CV-ADMMN model with fixed parameters;
S5.2, embed the minimum-entropy-based self-focusing module:
S5.2.1, construct a sparse-aperture observation scene containing a phase error:
y = EDFx + n (15)
where E = diag[exp(jφ1), exp(jφ2), ..., exp(jφL)] denotes the phase-error matrix acting on the one-dimensional range profiles, and φl denotes the phase error of the l-th one-dimensional range profile;
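The action of E and its compensation can be sketched as follows. This toy example (not the patent's code) places one range profile per row; the row/column convention here is an assumption:

```python
import numpy as np

# E = diag(exp(j*phi_1), ..., exp(j*phi_L)) multiplies each one-dimensional
# range profile by a pulse-dependent phase; self-focusing removes it by
# applying the conjugate phases once phi has been estimated.
rng = np.random.default_rng(2)
L, N = 5, 16
profiles = rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))
phi = rng.uniform(-np.pi, np.pi, size=L)

E = np.diag(np.exp(1j * phi))
corrupted = E @ profiles                 # phase error applied per pulse
compensated = np.conj(E) @ corrupted     # compensation with the exact phases
```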
S5.2.2, in the above model E is the unknown phase-error matrix; to realize the self-focusing function, the invention estimates E by the minimum-entropy method. For any reconstruction-layer output X(k) in equation (8), the estimate of the phase-error matrix E is given by:
where φ = [φ1, φ2, ..., φL], and e(X(k)(φ)) denotes the entropy of the matrix X(k)(φ), with the expression:
where the numerator quantity denotes the element in row i, column j of the matrix X(k), and the normalizing quantity denotes the total energy of the matrix; the phase-error value that minimizes the entropy is obtained by solving the following equation:
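The image-entropy criterion above can be sketched as follows: normalizing the squared moduli by the total energy gives a distribution p, and e = −Σ p·ln p. A well-focused image concentrates energy and so has lower entropy. This is a generic illustration, not the patent's implementation:

```python
import numpy as np

def image_entropy(X):
    # p_ij = |X_ij|^2 / S with S the total energy; e = -sum p_ij * ln(p_ij).
    p = np.abs(X) ** 2
    p = p / p.sum()
    p = p[p > 0]                 # skip zero-energy cells to avoid log(0)
    return float(-(p * np.log(p)).sum())

focused = np.zeros((8, 8)); focused[3, 4] = 1.0   # energy in one cell
defocused = np.ones((8, 8)) / 8.0                 # energy spread uniformly
```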
where l = 1, 2, ..., L; the value of the vector φ can finally be solved for;
Substituting the analytical expression of x(k) in equation (8) into equation (18) yields the analytical expression of φl:
where Y·l denotes the l-th column of the matrix Y, and 0 denotes the all-zero matrix; the estimated phase error can then be used to realize the self-focusing function;
S5.2.3, embedding the self-focusing module into the CV-ADMMN structure via the analytical expression (19) yields the forward-propagation expression of CV-ADMMN with the self-focusing function:
Using equation (20), a CV-ADMMN model with the self-focusing function can be constructed; the unknown parameters in equation (20) are obtained through the training step of S5.1;
S5.3, perform sparse-aperture imaging and self-focusing using the CV-ADMMN with the embedded self-focusing module:
S5.3.1, acquire the actually observed sparse-aperture echo, obtain the sparse-aperture one-dimensional range-profile sequence by fast-time FFT, and perform coarse translational compensation on the one-dimensional range-profile sequence using the cross-correlation method;
S5.3.2, input the coarsely compensated one-dimensional range-profile sequence Y into CV-ADMMN and propagate it forward through the network to obtain a high-quality ISAR image X.
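The coarse translational compensation of S5.3.1 aligns each range-profile envelope to a reference by the circular shift that maximizes the cross-correlation. A minimal sketch under the assumption of a fixed first-profile reference (practical methods often accumulate or average the reference):

```python
import numpy as np

def align_profiles(profiles):
    # Cross-correlation range alignment: circularly shift each profile's
    # envelope to best match the envelope of the first profile.
    ref = np.abs(profiles[0])
    aligned = [profiles[0]]
    for p in profiles[1:]:
        env = np.abs(p)
        # circular cross-correlation corr[k] = sum_m ref[m+k] * env[m], via FFT
        corr = np.real(np.fft.ifft(np.fft.fft(ref) * np.conj(np.fft.fft(env))))
        shift = int(np.argmax(corr))
        aligned.append(np.roll(p, shift))
    return np.array(aligned)

base = np.zeros(32, dtype=complex)
base[10:14] = [1, 3, 2, 1]               # a simple target envelope
shifted = np.roll(base, 5)               # simulated range walk of 5 cells
aligned = align_profiles(np.array([base, shifted]))
```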
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010975711.6A CN112099008B (en) | 2020-09-16 | 2020-09-16 | SA-ISAR imaging and self-focusing method based on CV-ADMMN |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112099008A CN112099008A (en) | 2020-12-18 |
CN112099008B true CN112099008B (en) | 2022-05-27 |
Family
ID=73760292
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010975711.6A Active CN112099008B (en) | 2020-09-16 | 2020-09-16 | SA-ISAR imaging and self-focusing method based on CV-ADMMN |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112099008B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112965064B (en) * | 2021-01-19 | 2022-03-08 | 中国人民解放军国防科技大学 | PCSBL-GAMP-Net-based block sparse aperture ISAR imaging method |
CN112946644B (en) * | 2021-01-28 | 2022-04-19 | 中国人民解放军国防科技大学 | Based on minimizing the convolution weight l1Norm sparse aperture ISAR imaging method |
CN113253269B (en) * | 2021-06-03 | 2021-10-15 | 中南大学 | SAR self-focusing method based on image classification |
CN113253272B (en) * | 2021-07-15 | 2021-10-29 | 中国人民解放军国防科技大学 | Target detection method and device based on SAR distance compressed domain image |
CN113640795B (en) * | 2021-07-27 | 2024-02-13 | 北京理工大学 | SAR intelligent parameterized self-focusing method based on generation countermeasure network |
CN114140325B (en) * | 2021-12-02 | 2024-04-09 | 中国人民解放军国防科技大学 | C-ADMN-based structured sparse aperture ISAR imaging method |
CN115421115A (en) * | 2022-05-23 | 2022-12-02 | 中国人民解放军空军预警学院 | Weight-weighted alternating direction multiplier method for combining phase correction and ISAR imaging |
CN117148347A (en) * | 2023-06-13 | 2023-12-01 | 中国人民解放军空军预警学院 | Two-dimensional joint imaging and self-focusing method based on deep learning network |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110244303A (en) * | 2019-07-12 | 2019-09-17 | 中国人民解放军国防科技大学 | SBL-ADMM-based sparse aperture ISAR imaging method |
CN110275166A (en) * | 2019-07-12 | 2019-09-24 | 中国人民解放军国防科技大学 | ADMM-based rapid sparse aperture ISAR self-focusing and imaging method |
CN111610522A (en) * | 2020-06-04 | 2020-09-01 | 中国人民解放军国防科技大学 | SA-ISAR imaging method for target with micro-motion component based on low-rank and sparse combined constraint |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9702971B2 (en) * | 2014-03-17 | 2017-07-11 | Raytheon Company | High-availability ISAR image formation |
CN109100718B (en) * | 2018-07-10 | 2019-05-28 | 中国人民解放军国防科技大学 | Sparse aperture ISAR self-focusing and transverse calibration method based on Bayesian learning |
CN109085589B (en) * | 2018-10-16 | 2019-04-30 | 中国人民解放军国防科技大学 | Sparse aperture ISAR imaging phase self-focusing method based on image quality guidance |
Non-Patent Citations (2)
Title |
---|
Fast Sparse Aperture ISAR Autofocusing and Imaging via ADMM Based Sparse Bayesian Learning; Shuanghui Zhang et al.; IEEE Transactions on Image Processing; 2019-12-11; full text *
Robust, Efficient and General Sparse Feature Enhancement Algorithm for SAR Images; Yang Lei et al.; Journal of Electronics & Information Technology; 2019-12-15 (No. 12); full text *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||