CN113466864B - Rapid combined inverse-free sparse Bayes learning super-resolution ISAR imaging algorithm - Google Patents

Rapid combined inverse-free sparse Bayes learning super-resolution ISAR imaging algorithm

Info

Publication number
CN113466864B
CN113466864B (application CN202110935986.1A)
Authority
CN
China
Prior art keywords
gamma
distribution
algorithm
expressed
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110935986.1A
Other languages
Chinese (zh)
Other versions
CN113466864A (en)
Inventor
何兴宇
任晓岳
刘桃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Air Force Engineering University of PLA
Original Assignee
Air Force Engineering University of PLA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Air Force Engineering University of PLA
Priority to CN202110935986.1A
Publication of CN113466864A
Application granted
Publication of CN113466864B
Active legal status
Anticipated expiration

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 - Radar or analogous systems specially adapted for specific applications
    • G01S13/89 - Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S13/90 - Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • G01S13/904 - SAR modes
    • G01S13/9064 - Inverse SAR [ISAR]
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/418 - Theoretical aspects

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Image Processing (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention relates to the technical field of signal processing, in particular to a rapid combined inverse-free sparse Bayesian learning super-resolution ISAR imaging algorithm, which comprises the following steps: S1, initializing the parameter γ, where γ is a non-negative random initial value; S2, initializing the parameter Z; S3, calculating the mean M and the variance Σ of the posterior probability distribution and, from them, the factors q_α(α) and q_γ(γ); S4, iteratively updating the parameter Z; S5, repeating S3 and S4 until ‖M^(t) − M^(t−1)‖_F ≤ δ, where δ is a preset threshold.

Description

Rapid combined inverse-free sparse Bayes learning super-resolution ISAR imaging algorithm
Technical Field
The invention relates to the technical field of signal processing, in particular to a fast joint inverse-free sparse Bayesian learning super-resolution ISAR imaging algorithm.
Background
Targets observed by inverse synthetic aperture radar (ISAR) imaging are generally sparse in the observation scene, i.e., the target image is sparse over the whole background domain. The condition for sparse reconstruction is therefore satisfied, and imaging can be performed with sparse reconstruction methods. In practice, the wide-band, long-duration continuous observation of a fixed scene required for high-resolution imaging is hard to guarantee, so the radar often faces a sparse-aperture imaging problem. Under sparse-aperture conditions, traditional imaging methods produce strong sidelobes and grating lobes, and the imaging quality is poor.
When a sparse reconstruction algorithm is used to image a moving target, the algorithm that obtains the sparsest solution generally yields the better image. Tipping proposed sparse Bayesian learning (SBL) based on the relevance vector machine (RVM), which reconstructs the original sparse signal through iterative optimization. The method is based on probabilistic sparse learning, requires no additional prior information about the signal, and readily obtains the sparsest solution, so SBL has been widely applied in signal and image processing, pattern recognition and related fields. SBL-based super-resolution ISAR imaging has also been studied, acquiring ISAR images of targets from a small number of pulses; compared with other CS-based imaging methods, the SBL-based approach has clear advantages in parameter estimation and selection, image reconstruction quality and other respects.
Most sparse signal reconstruction methods are designed for one-dimensional sparse signals and can be regarded as single measurement vector (SMV) reconstruction methods. When such methods are applied to two-dimensional signal processing, such as image processing, the two-dimensional signal must first be vectorized into a one-dimensional signal and then reconstructed; this lowers algorithmic efficiency, and the reconstruction quality for two-dimensional sparse signals is mediocre. Current CS-based ISAR imaging methods mostly vectorize the signal to be reconstructed and then reconstruct it, or reconstruct the signal row by row. These methods exploit only the one-dimensional sparsity of the target image and do not exploit its two-dimensional joint sparsity.
Disclosure of Invention
The invention aims to provide a rapid combined inverse-free sparse Bayesian learning super-resolution ISAR imaging algorithm, which solves the problems of high algorithmic complexity and large computational cost in the prior art.
In order to solve the technical problems, the invention adopts the following technical scheme:
a fast joint inverse-free sparse Bayesian learning super-resolution ISAR imaging algorithm is characterized by comprising the following steps:
S1, initializing the parameter γ, where γ is a non-negative random initial value;
S2, initializing the parameter Z;
S3, calculating the mean M and the variance Σ of the posterior probability distribution, and using them to calculate q_α(α) and q_γ(γ);
S4, iteratively updating the parameter Z;
S5, repeating S3 and S4 until ‖M^(t) − M^(t−1)‖_F ≤ δ, where δ is a preset threshold.
Before the parameters are initialized, the received signal can be expressed as equation (1) and the range-compressed signal as equation (2). Assuming there are M pulses within the coherent accumulation time and the pulse repetition frequency is divided into N Doppler cells, x(τ, t) in equation (2) is written as the N × M matrix X = [x_nm]. Applying sparse representation theory along the range dimension of the echo, the matrix form of equation (1) is expressed as: Y = ΦX + V  (3).
In a further technical solution, a Bayesian method is used to reconstruct the signal X from equation (3). First, a two-layer prior is placed on X: in the first layer, X is assigned a Gaussian prior distribution parameterized by α; in the second layer, the hyperparameter α is assumed to obey a Gamma distribution. Meanwhile, the noise V in equation (3) is required to have zero mean and covariance matrix (1/γ)I, where γ is estimated through iterative learning; γ is further assumed to obey a Gamma distribution, namely:
p(γ) = Gamma(γ|c, d) = Γ(c)^(-1) d^c γ^(c-1) e^(-dγ)  (4);
Let θ = {X, α, γ} denote the hidden variables of the hierarchical model in (4); the variational distribution can then be represented as:
q(θ) = q_X(X) q_α(α) q_γ(γ);
Further, let the iterative update of q_X(X) obey a Gaussian distribution; combining the likelihood function and the prior distribution, the posterior probability density function of the j-th column of X can be expressed as equation (5).
the further technical scheme is that the solution of (5) is carried out by maximizing the unconstrained evidence lower bound, and the form of the evidence lower bound is as follows:
Figure BDA0003213161390000033
theorem is introduced: order the
Figure BDA0003213161390000034
Represents a continuous microtifunction and there are Lipschitz constants and Lipschitz continuous gradients for arbitrary +.>
Figure BDA0003213161390000035
And T.gtoreq.T (f), the following inequality holds:
Figure BDA0003213161390000036
and (3) combining the formula (6) and the formula (7) to obtain an unconstrained evidence lower bound, wherein the unconstrained evidence lower bound is expressed as follows:
Figure BDA0003213161390000037
the unconstrained evidence lower bound in equation (8) may be further expressed as:
Figure BDA0003213161390000038
the unconstrained evidence lower bound is then maximized by using a variational expectation maximization algorithm
Figure BDA0003213161390000041
In E-step, assuming that the other variables are constants, the posterior distribution function for each hidden variable is calculated, in M-step, q (θ) is fixed, and ≡is maximized>
Figure BDA0003213161390000042
Function about Z.
In a further technical proposal, the E-step calculation comprises the iterative update of q_X(X), the loop iteration of q_α(α) and the loop iteration of q_γ(γ).
In a further technical proposal, in the iterative update of q_X(X), the posterior probability distribution q_X(X) is expressed by equation (9), where ⟨α_n⟩ denotes the expectation of α_n with respect to q_α(α). From equation (9) it follows that q_X(X) obeys a Gaussian distribution with mean M and variance Σ.
In a further technical proposal, in the loop iteration of q_α(α), the posterior distribution q_α(α) follows from the variational update, in which ⟨x_nl²⟩ is the expectation of x_nl² with respect to q_X(X) and x_nl denotes the l-th element of the n-th row of X; that is, α satisfies the Gamma distribution used in S3, with its parameters updated from a, b and the posterior moments ⟨x_nl²⟩.
In a further technical proposal, in the loop iteration of q_γ(γ), the variational optimization of q_γ(γ) shows that γ likewise satisfies the Gamma distribution used in S3, with its parameters updated from c, d and the posterior moments of X.
In a further technical scheme, substituting q(θ; Z_old) into the lower bound gives the optimization problem over Z used in S4. Setting the gradient of this expression with respect to Z to zero gives the update of equation (10), Z = M; the derivation holds because TI − 2Φ^TΦ is a positive definite matrix satisfying T > T(f) = 2λ_max(Φ^TΦ), where λ_max(Φ^TΦ) denotes the maximum eigenvalue of Φ^TΦ.
Compared with the prior art, the invention has the following beneficial effects: although an inverse of an N × N matrix still has to be computed when iteratively updating the variance Σ, the matrix to be inverted is diagonal, so its inverse is obtained very quickly; the algorithm thereby avoids matrix inversions of high computational complexity, and its computational load is greatly reduced.
Drawings
Fig. 1 is a graph of the reconstruction MSE of each algorithm at different sparsities K, with the other parameters set as follows: M=250, N=500, L=20.
Fig. 2 is a graph of the reconstruction MSE of each algorithm at different values of the parameter M, with the other parameters set as follows: N=500, L=20, K=120.
Fig. 3 is a graph of the reconstruction MSE of each algorithm at different signal-to-noise ratios.
Fig. 4 is a graph of the average running time of each algorithm at different values of the parameter N, with the other parameters set as follows: M=N/2, K=N/10, L=20, SNR=20 dB.
FIG. 5 is a graph of the result of super-resolution ISAR imaging of B727 data by the PC-SBL algorithm.
FIG. 6 is a graph showing the results of super-resolution ISAR imaging of B727 data by the M-FOCUSS algorithm.
FIG. 7 is a graph of the results of super-resolution ISAR imaging of B727 data by the M-SBL algorithm.
FIG. 8 is a graph of the result of super-resolution ISAR imaging of B727 data by the algorithm of the present invention.
FIG. 9 is a MSE plot of super-resolution ISAR imaging of B727 for different algorithms.
FIG. 10 is a graph showing the time contrast of super-resolution ISAR imaging of B727 by different algorithms.
FIG. 11 is a graph showing the results of super-resolution ISAR imaging of Yak-42 by the PC-SBL algorithm.
FIG. 12 is a graph showing the results of super-resolution ISAR imaging of Yak-42 by the M-FOCUSS algorithm.
FIG. 13 is a graph showing the results of the M-SBL algorithm on the super-resolution ISAR imaging of Yak-42.
FIG. 14 is a graph showing the results of the algorithm of the present invention on the super-resolution ISAR imaging of Yak-42.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Examples:
Figs. 1-14 illustrate a preferred implementation of the fast joint inverse-free sparse Bayesian learning super-resolution ISAR imaging algorithm of the present invention; in this embodiment the algorithm specifically comprises the following steps:
S1, initializing the parameter γ, where γ is a non-negative random initial value;
S2, initializing the parameter Z;
S3, calculating the mean M and the variance Σ of the posterior probability distribution, and using them to calculate q_α(α) and q_γ(γ);
S4, iteratively updating the parameter Z;
S5, repeating S3 and S4 until ‖M^(t) − M^(t−1)‖_F ≤ δ, where δ is a preset threshold.
Before the parameters are initialized, the radar is assumed to transmit linear frequency-modulated (chirp) signals; the received signal is expressed as equation (1), and the range-compressed signal as equation (2), where c is the propagation velocity of the electromagnetic wave, ω_0 is the angular velocity of the uniform rotation, R_0 is the distance from the rotation center to the radar, τ is the fast time, t_m is the slow time, T_p is the pulse width, f_c is the carrier frequency, μ is the chirp rate, B is the bandwidth of the transmitted signal, A_k is the scattering coefficient of the k-th scattering center P_k(x_k, y_k), T_a is the coherent integration time, and K is the number of scattering points.
Assuming that there are M pulses within the coherent accumulation time and that the pulse repetition frequency is divided into N Doppler cells, x(τ, t) in equation (2) is written as the N × M matrix X = [x_nm]. Applying sparse representation theory along the range dimension of the echo, the matrix form of equation (1) is expressed as: Y = ΦX + V  (3).
Wherein,,
Figure BDA0003213161390000072
in the form of a matrix of y (τ, t) in formula (1)>
Figure BDA0003213161390000073
Representing noise matrix, sparse dictionary Φ N×N Can be expressed as:
Figure BDA0003213161390000074
if it is
Figure BDA0003213161390000075
Then->
Figure BDA0003213161390000076
Representing a super-resolution range profile.
The matrix X^T can be expressed in the following form: X^T = FA;
where A = [a_mn] is the two-dimensional super-resolution ISAR image of the target, whose element values represent the scattering amplitudes of the scattering points. The associated parameters denote, respectively, the super-resolution factor of the range profile and the super-resolution factor of the azimuth profile, and F denotes a partial Fourier matrix. After the joint reconstruction of the range profile is completed, the reconstruction problem X^T = FA is solved with a traditional CS reconstruction method to obtain the ISAR image of the target. The noise V is assumed to obey a multivariate Gaussian distribution with mean 0 and variance σ^2 I.
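For illustration, the sparse-aperture data model of equation (3) can be set up as in the minimal Python sketch below. The partial Fourier dictionary (as used in the experiments later), the dimensions, the row-sparse scene and all variable names are assumptions made for illustration, not the patent's exact construction.

```python
import numpy as np

# Minimal sketch of the sparse-aperture data model Y = Phi X + V of equation (3).
rng = np.random.default_rng(0)

N = 64          # number of Doppler cells after super-resolution
M = 32          # number of observed pulses (sparse aperture, M < N)
L = 64          # number of range bins jointly reconstructed (columns of X)
K = 8           # number of rows of X actually occupied by scatterers

# Partial Fourier dictionary: keep M rows of an N-point DFT matrix.
rows = np.sort(rng.choice(N, size=M, replace=False))
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
Phi = F[rows, :]                                   # M x N measurement/dictionary matrix

# Row-sparse scene X: the same K rows are non-zero in every column (joint sparsity).
X = np.zeros((N, L), dtype=complex)
support = rng.choice(N, size=K, replace=False)
X[support, :] = (rng.standard_normal((K, L)) + 1j * rng.standard_normal((K, L))) / np.sqrt(2)

# Noisy sparse-aperture measurements.
sigma = 0.05
V = sigma * (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))) / np.sqrt(2)
Y = Phi @ X + V
```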
To reconstruct the signal X from equation (3) with a Bayesian method, X is first given a two-layer prior. In the first layer, X is assigned a Gaussian prior distribution parameterized by α, namely
p(X|α) = ∏_(n=1)^N N(X^(n) | 0, α_n^(-1) I),
where the α_n are non-negative hyperparameters controlling the prior variance of each row of X, and X^(n) denotes the n-th row of X.
In the second layer, the hyperparameter α is assumed to obey a Gamma distribution, namely
p(α) = ∏_(n=1)^N Gamma(α_n | a, b) = ∏_(n=1)^N Γ(a)^(-1) b^a α_n^(a-1) e^(-b α_n),
where Γ(·) denotes the Gamma function. The parameter b is typically chosen as a very small value, e.g. 10^-4, whereas the parameter a is typically chosen as a relatively larger value, usually a ∈ [0,1].
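To make the two-layer (Gaussian-Gamma) prior concrete, the short sketch below draws one realization of the row hyperparameters and of X under the precision parameterization assumed above; the dimensions and hyperparameter values are illustrative only.

```python
import numpy as np

# Illustrative draw from the two-layer prior: alpha_n ~ Gamma(a, rate=b) controls the
# precision of row n of X, and each row is zero-mean Gaussian with variance 1/alpha_n.
rng = np.random.default_rng(2)
N, L = 64, 32
a, b = 1.0, 1e-4                                   # shape / rate, values as in the text

alpha = rng.gamma(shape=a, scale=1.0 / b, size=N)  # numpy's scale is 1/rate
X = rng.standard_normal((N, L)) / np.sqrt(alpha)[:, None]
```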
Meanwhile, the noise V in equation (3) is required to have zero mean and covariance matrix (1/γ)I, where γ is estimated through iterative learning; γ is further assumed to obey a Gamma distribution, namely:
p(γ) = Gamma(γ|c, d) = Γ(c)^(-1) d^c γ^(c-1) e^(-dγ)  (4);
Let θ = {X, α, γ} denote the hidden variables of the hierarchical model in (4); the variational distribution can then be represented as:
q(θ) = q_X(X) q_α(α) q_γ(γ);
Further, let the iterative update of q_X(X) obey a Gaussian distribution; combining the likelihood function and the prior distribution, the posterior probability density function of the j-th column of X can be expressed as equation (5).
The mean and variance of this posterior can be written in closed form; they involve the M × M matrix Σ_t = γI + ΦDΦ^T, where D is a diagonal matrix determined by the hyperparameters α. From these expressions it can be seen that, in the iteration of the posterior distribution of the M-SBL algorithm, the inverse of an M × M matrix has to be computed at every iteration. The computational complexity of the variational M-SBL algorithm is therefore approximately O(M³). Such high computational complexity limits its application to the many problems that require handling large amounts of data.
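To make the O(M³) bottleneck concrete, the following sketch evaluates the exact per-iteration posterior of standard M-SBL in its textbook form. Because the patent renders its own expressions as images, the explicit formulas and notational conventions here (γ as noise precision, so the noise covariance is (1/γ)I as stated above, and D built from α) are assumptions rather than quotations.

```python
import numpy as np

def msbl_posterior(Y, Phi, alpha, gamma):
    """Exact posterior of standard M-SBL (textbook form, assumed here).

    Every call inverts an M x M matrix Sigma_t = (1/gamma) I + Phi D^{-1} Phi^H,
    which is the O(M^3) step the inverse-free algorithm removes.
    """
    M, N = Phi.shape
    D_inv = np.diag(1.0 / alpha)                      # prior (co)variance of the rows
    Sigma_t = np.eye(M) / gamma + Phi @ D_inv @ Phi.conj().T
    Sigma_t_inv = np.linalg.inv(Sigma_t)              # the expensive M x M inverse
    Mean = D_inv @ Phi.conj().T @ Sigma_t_inv @ Y     # posterior mean of X
    Sigma = D_inv - D_inv @ Phi.conj().T @ Sigma_t_inv @ Phi @ D_inv   # posterior covariance
    return Mean, Sigma
```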
In this scheme, equation (5) is solved instead by maximizing an unconstrained evidence lower bound; the evidence lower bound takes the form of equation (6).
A theorem is introduced: let f(x) denote a continuously differentiable function that has a Lipschitz constant and a Lipschitz-continuous gradient; then for arbitrary x, z and T ≥ T(f), the following inequality holds:
f(x) ≤ f(z) + (x − z)^T ∇f(z) + (T/2)‖x − z‖^2  (7);
According to the above theorem, a lower bound of p(y|x, γ) can be obtained by applying inequality (7) to f(x) = ‖y − Φx‖^2, which yields the relaxed likelihood of equation (a). The inequality in equation (a) holds for any Z, and when Z = X it becomes an equality.
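As a quick numerical sanity check of inequality (7) for this particular f, the sketch below compares f(x) with its quadratic upper bound at random points, using T = 2λ_max(Φ^TΦ); all sizes and values are illustrative.

```python
import numpy as np

# Check inequality (7) for f(x) = ||y - Phi x||^2, whose gradient has
# Lipschitz constant 2 * lambda_max(Phi^T Phi).
rng = np.random.default_rng(1)
M, N = 30, 60
Phi = rng.standard_normal((M, N))
y = rng.standard_normal(M)

f = lambda x: np.sum((y - Phi @ x) ** 2)
grad_f = lambda z: 2.0 * Phi.T @ (Phi @ z - y)
T = 2.0 * np.linalg.eigvalsh(Phi.T @ Phi).max()

for _ in range(5):
    x = rng.standard_normal(N)
    z = rng.standard_normal(N)
    bound = f(z) + grad_f(z) @ (x - z) + 0.5 * T * np.sum((x - z) ** 2)
    assert f(x) <= bound + 1e-9        # inequality (7); equality when x == z
```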
Combining equations (6) and (7) yields an unconstrained evidence lower bound, expressed as equation (8). The unconstrained evidence lower bound in equation (8) can be further expressed in terms of a function h(Z) that collects the Z-dependent terms. The unconstrained evidence lower bound is then maximized with a variational expectation-maximization (EM) algorithm: in the E-step, the remaining variables are treated as constants and the posterior distribution function of each hidden variable is calculated; in the M-step, q(θ) is fixed and the lower bound is maximized as a function of Z.
The E-step calculation comprises the iterative update of q_X(X), the loop iteration of q_α(α) and the loop iteration of q_γ(γ).
In the iterative update of q_X(X), the posterior probability distribution q_X(X) is expressed by the variational update of equation (9), where ⟨α_n⟩ denotes the expectation of α_n with respect to q_α(α). From equation (9) it follows that q_X(X) obeys a Gaussian distribution whose mean M and variance Σ are given by the corresponding closed-form updates, where ⟨γ⟩ denotes the expectation of γ.
In the loop iteration of q_α(α), the posterior distribution q_α(α) follows from the variational update, in which ⟨x_nl²⟩ is the expectation of x_nl² with respect to q_X(X) and x_nl denotes the l-th element of the n-th row of X; that is, α satisfies the Gamma distribution used in S3, with its shape and rate parameters updated from a, b and the posterior moments ⟨x_nl²⟩.
In the loop iteration of q_γ(γ), the variational optimization of q_γ(γ) shows that γ likewise satisfies the Gamma distribution used in S3, with its shape and rate parameters updated from c, d and the posterior moments of X.
In summary, the E-step implements the iterative updating of the posterior probability distributions of the hidden variables X, α and γ. In this process, part of the required posterior moments are expressed through the trace and the diagonal elements of Σ, where Tr(A) denotes the trace of a matrix A and Σ_{n,n} denotes the n-th diagonal element of the matrix Σ.
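Because the patent's update formulas are rendered as images, the following Python sketch of the E-step uses the standard inverse-free SBL update equations (diagonal posterior covariance, Gamma moment update for α) as an assumption of what S3 computes; the hyperparameter names, shapes and real/complex conventions are likewise assumptions.

```python
import numpy as np

def e_step(Y, Phi, Z, alpha_mean, gamma_mean, T):
    """One E-step of an inverse-free joint SBL iteration (assumed standard form).

    Returns the posterior mean M and diagonal posterior variance of q_X(X),
    plus the second moments needed to update q_alpha.  Only a diagonal
    (element-wise) inverse appears, which is the point of the method.
    """
    # Diagonal posterior precision: (T/2)*<gamma> + <alpha_n> for every row n.
    prec_diag = 0.5 * T * gamma_mean + alpha_mean             # length-N vector
    Sigma_diag = 1.0 / prec_diag                               # cheap "inversion"
    # Posterior mean, all columns at once.
    M = gamma_mean * Sigma_diag[:, None] * (0.5 * T * Z + Phi.conj().T @ (Y - Phi @ Z))
    # Second moments <|x_nl|^2> = |M_nl|^2 + Sigma_nn, used by the q_alpha update.
    x2 = np.abs(M) ** 2 + Sigma_diag[:, None]
    return M, Sigma_diag, x2

def update_alpha_mean(x2, a, b):
    """Posterior mean of alpha_n under its Gamma update (assumed standard form)."""
    L = x2.shape[1]
    return (a + 0.5 * L) / (b + 0.5 * x2.sum(axis=1))
```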
In the M-step, substituting q(θ; Z_old) into the lower bound gives the optimization problem over Z used in S4. Setting the gradient of this expression with respect to Z to zero gives the update
Z = M  (10);
the derivation holds because TI − 2Φ^TΦ is a positive definite matrix satisfying T > T(f) = 2λ_max(Φ^TΦ), where λ_max(Φ^TΦ) denotes the maximum eigenvalue of Φ^TΦ.
In summary, with the observed data Y and the structured sparse dictionary Φ, the complete algorithm consists of steps S1-S5 above.
In this scheme, although an inverse of an N × N matrix still has to be computed when iteratively updating the variance Σ, the matrix to be inverted is diagonal, so its inverse is obtained very quickly; the algorithm thereby avoids matrix inversions of high computational complexity, and its computational load is greatly reduced.
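To tie steps S1-S5 together, the following self-contained Python sketch implements one plausible version of the whole loop. The explicit update formulas (diagonal posterior covariance, the Gamma moment update for α, the initialization of Z, and a simplified surrogate for the q_γ update) are assumptions based on standard inverse-free SBL, not formulas quoted from the patent.

```python
import numpy as np

def fast_if_joint_sbl(Y, Phi, a=1e-6, b=1e-6, c=1e-6, d=1e-6, delta=1e-4, max_iter=500):
    """Sketch of steps S1-S5 using assumed standard inverse-free SBL updates."""
    M_obs, N = Phi.shape
    L = Y.shape[1]
    # T slightly above the Lipschitz constant 2*lambda_max(Phi^H Phi), as in Experiment 1.
    T = 2.0 * np.linalg.eigvalsh(Phi.conj().T @ Phi).real.max() + 1e-6
    alpha = np.ones(N)                       # S1: non-negative initial hyperparameters
    gamma = 1.0                              # S1: initial noise precision
    Z = Phi.conj().T @ Y                     # S2: initialization of Z (an assumption)
    M_old = np.zeros((N, L), dtype=complex)
    for _ in range(max_iter):
        # S3 (E-step): diagonal posterior covariance -> only element-wise inversion.
        Sigma_diag = 1.0 / (0.5 * T * gamma + alpha)
        Mean = gamma * Sigma_diag[:, None] * (0.5 * T * Z + Phi.conj().T @ (Y - Phi @ Z))
        x2 = np.abs(Mean) ** 2 + Sigma_diag[:, None]          # <|x_nl|^2>
        alpha = (a + 0.5 * L) / (b + 0.5 * x2.sum(axis=1))     # q_alpha moment update
        # Simplified residual-based surrogate for the q_gamma update (assumption);
        # the exact update also involves the posterior covariance and Z-dependent terms.
        resid = np.linalg.norm(Y - Phi @ Mean) ** 2
        gamma = (c + 0.5 * M_obs * L) / (d + 0.5 * resid)
        Z = Mean                             # S4 (M-step): Z <- M, since TI - 2*Phi^T*Phi > 0
        if np.linalg.norm(Mean - M_old) <= delta:              # S5: ||M(t)-M(t-1)||_F <= delta
            break
        M_old = Mean
    return Mean
```

In this sketch, Mean plays the role of M in S3 and S5, and the returned matrix is the reconstructed super-resolution range profile, whose transpose feeds the azimuth reconstruction X^T = FA described above.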
The advantages of the algorithm are demonstrated by several experiments:
experimental example one:
Experiment 1 verifies the effectiveness of the proposed algorithm with simulated signals. The parameters a, b, u and v are all set to 10^-6. The parameter T is set to a value slightly greater than the Lipschitz constant, T = λ_max(2Φ^TΦ) + 10^-6. The reconstruction performance of the proposed algorithm is compared with the PC-SBL algorithm, the group primal-dual active set with continuation (GPDASC) algorithm, the M-FOCUSS algorithm, the M-SBL algorithm and the T-MSBL algorithm. In the simulation, the L sparse vectors of the original data each contain K non-zero elements, and the non-zero positions are the same across the vectors; these positions are chosen at random and the amplitudes follow a standard normal distribution. The matrix Φ is a Gaussian random matrix whose elements are independent and identically distributed standard Gaussian variables. All experiments report the results of 100 Monte Carlo simulations.
The reconstruction performance of each algorithm is analyzed quantitatively through the mean square error, defined as MSE = ‖R − R_0‖_F / ‖R_0‖_F, where R_0 denotes the full-aperture ISAR image data matrix obtained with the RD algorithm and R denotes the sparse-aperture imaging data matrix obtained with each of the other algorithms.
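The error metric itself is a one-line computation; the helper below (illustrative naming) simply mirrors the definition above.

```python
import numpy as np

def reconstruction_mse(R, R0):
    """Relative Frobenius-norm error ||R - R0||_F / ||R0||_F used in the experiments."""
    return np.linalg.norm(R - R0) / np.linalg.norm(R0)
```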
As can be seen in Figs. 1-3, the PC-SBL and GPDASC algorithms reconstruct relatively poorly, mainly because both require each vector of the signal to be block sparse and require partial prior information. The remaining algorithms have similar reconstruction performance and recover the original signal well. The average CPU time is also used to measure the computational complexity of each algorithm. As can be seen from Fig. 4, the proposed algorithm avoids matrix inversion and has the lowest computational complexity and the shortest computation time. Considering reconstruction quality and computational complexity together, the proposed algorithm performs best.
Experimental example two:
ISAR imaging experiments on Boeing 727 (B727) simulated data are carried out to verify the effectiveness of the proposed algorithm. The radar transmits linear frequency-modulated signals with a carrier frequency of 9 GHz, a signal bandwidth of 150 MHz and a pulse repetition frequency of 20 kHz. The values of the parameters a, b, u, v and T are the same as in Experiment 1, and Φ is a partial Fourier matrix. ISAR super-resolution imaging is first realized with 128 pulses, as shown in Figs. 5-8. Because the T-MSBL algorithm has a larger reconstruction error on complex-valued signals and the GPDASC algorithm mainly addresses block-sparse reconstruction, the reconstructed ISAR images of these two algorithms are not shown here; only the other three algorithms from Experiment 1 are compared.
As can be seen from Figs. 5-8, compared with the other algorithms, the image reconstructed by the proposed algorithm has higher resolution and good focusing performance. To analyze the imaging quality quantitatively, the reconstruction mean square error of each algorithm is also calculated. As shown in Figs. 9-10 (all results are averaged over 200 Monte Carlo runs), the reconstructed ISAR image MSE of the proposed algorithm is the smallest. In addition, a comparison of the average running times shows that the proposed algorithm is the fastest, confirming that its computational complexity is lower than that of the other algorithms.
Experimental example three:
The effectiveness of the algorithm is further verified with measured Yak-42 data. The radar again transmits a linear frequency-modulated signal with a center frequency of 10 GHz, a signal bandwidth of 400 MHz and a pulse repetition frequency of 100 Hz; 256 pulses are acquired within a coherent accumulation time of 2.56 s. The values of the parameters a, b, u, v and T are the same as in Experiment 1, and Φ is a partial Fourier matrix. As shown in Figs. 11-14, ISAR high-resolution imaging is achieved with 128 pulses.
Compared with the other algorithms, the ISAR image reconstructed by the proposed algorithm has higher resolution and good focusing performance. The imaging quality of each algorithm is analyzed quantitatively by comparing the reconstruction mean square errors; as can be seen from Figs. 11-14, the reconstructed ISAR image MSE of the proposed algorithm is the smallest. The average running times of the algorithms are also compared (again over 200 Monte Carlo runs), and the proposed algorithm requires the shortest running time among the Bayesian learning algorithms, further demonstrating its advantage in computational complexity.
Although the invention has been described herein with reference to a number of illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the scope and spirit of the principles of this disclosure. More specifically, various variations and modifications may be made to the component parts and/or arrangements of the subject combination arrangement within the scope of the disclosure, drawings and claims of this application. In addition to variations and modifications in the component parts and/or arrangements, other uses will be apparent to those skilled in the art.

Claims (3)

1. A fast joint inverse-free sparse Bayesian learning super-resolution ISAR imaging algorithm, characterized by comprising the following steps:
S1, initializing the parameter γ, where γ is a non-negative random initial value;
S2, initializing the parameter Z;
S3, calculating the mean M and the variance Σ of the posterior probability distribution, and using them to calculate q_α(α) and q_γ(γ);
S4, iteratively updating the parameter Z;
S5, repeating S3 and S4 until ‖M^(t) − M^(t−1)‖_F ≤ δ, where δ is a preset threshold;
before the parameters are initialized, the radar is assumed to transmit linear frequency-modulated signals, and the received signal is expressed as equation (1);
applying sparse representation theory along the range dimension of the echo, the matrix form of equation (1) is expressed as: Y = ΦX + V  (3);
the noise precision γ is assumed to obey a Gamma distribution, namely:
p(γ) = Gamma(γ|c, d) = Γ(c)^(-1) d^c γ^(c-1) e^(-dγ)  (4);
let θ = {X, α, γ} denote the hidden variables of the hierarchical model in (4); the variational distribution is then represented as:
q(θ) = q_X(X) q_α(α) q_γ(γ);
further, let the iterative update of q_X(X) obey a Gaussian distribution; combining the likelihood function and the prior distribution, the posterior probability density function of the j-th column of X is expressed as equation (5);
equation (5) is solved by maximizing an unconstrained evidence lower bound; the evidence lower bound takes the form of equation (6);
a theorem is introduced: let f(x) denote a continuously differentiable function that has a Lipschitz constant and a Lipschitz-continuous gradient; then for arbitrary x, z and T ≥ T(f), the following inequality holds:
f(x) ≤ f(z) + (x − z)^T ∇f(z) + (T/2)‖x − z‖^2  (7);
combining equations (6) and (7) yields the unconstrained evidence lower bound, equation (8), which can be further expressed in an equivalent form;
the unconstrained evidence lower bound is then maximized with a variational expectation-maximization algorithm: in the E-step, the other variables are treated as constants and the posterior distribution function of each hidden variable is calculated; in the M-step, q(θ) is fixed and the lower bound is maximized as a function of Z;
the E-step calculation comprises the iterative update of q_X(X), the loop iteration of q_α(α) and the loop iteration of q_γ(γ); in the iterative update of q_X(X), the posterior probability distribution q_X(X) is expressed by equation (9), where ⟨α_n⟩ denotes the expectation of α_n with respect to q_α(α); from equation (9) it follows that q_X(X) obeys a Gaussian distribution with mean M and variance Σ;
in the loop iteration of q_α(α), the posterior distribution q_α(α) follows from the variational update, in which ⟨x_nl²⟩ is the expectation of x_nl² with respect to q_X(X) and x_nl denotes the l-th element of the n-th row of X; that is, α satisfies the Gamma distribution used in S3, with its parameters updated from a, b and the posterior moments ⟨x_nl²⟩;
in the loop iteration of q_γ(γ), the variational optimization of q_γ(γ) shows that γ likewise satisfies the Gamma distribution used in S3, with its parameters updated from c, d and the posterior moments of X;
in the M-step, substituting q(θ; Z_old) into the lower bound gives the optimization problem over Z used in S4; setting the gradient of this expression with respect to Z to zero gives the update
Z = M  (10);
where TI − 2Φ^TΦ is a positive definite matrix satisfying T > T(f) = 2λ_max(Φ^TΦ), and λ_max(Φ^TΦ) denotes the maximum eigenvalue of Φ^TΦ.
2. The fast joint inverse-free sparse Bayesian learning super-resolution ISAR imaging algorithm according to claim 1, wherein: the range-compressed signal is expressed as equation (2); assuming that the number of pulses within the coherent accumulation time is M and that the pulse repetition frequency is divided into N Doppler cells, x(τ, t) in equation (2) is written as the N × M matrix X = [x_nm].
3. The fast joint inverse-free sparse Bayesian learning super-resolution ISAR imaging algorithm according to claim 2, wherein: the signal X is reconstructed from equation (3) with a Bayesian method; X is first given a two-layer prior, in which, in the first layer, X is assigned a Gaussian prior distribution parameterized by α, and, in the second layer, the hyperparameter α is assumed to obey a Gamma distribution; meanwhile, the noise V in equation (3) has zero mean and covariance matrix (1/γ)I, and γ is estimated through iterative learning.
CN202110935986.1A 2021-08-16 2021-08-16 Rapid combined inverse-free sparse Bayes learning super-resolution ISAR imaging algorithm Active CN113466864B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110935986.1A CN113466864B (en) 2021-08-16 2021-08-16 Rapid combined inverse-free sparse Bayes learning super-resolution ISAR imaging algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110935986.1A CN113466864B (en) 2021-08-16 2021-08-16 Rapid combined inverse-free sparse Bayes learning super-resolution ISAR imaging algorithm

Publications (2)

Publication Number Publication Date
CN113466864A CN113466864A (en) 2021-10-01
CN113466864B true CN113466864B (en) 2023-07-04

Family

ID=77866764

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110935986.1A Active CN113466864B (en) 2021-08-16 2021-08-16 Rapid combined inverse-free sparse Bayes learning super-resolution ISAR imaging algorithm

Country Status (1)

Country Link
CN (1) CN113466864B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114706217B (en) * 2022-06-06 2022-11-11 西安电子科技大学 Maneuvering platform forward-looking super-resolution imaging method based on sparse Bayesian learning framework
CN116679301B (en) * 2023-07-28 2023-10-20 西安电子科技大学 Method for rapidly reconstructing target range profile of broadband radar in super-resolution mode


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2429138A1 (en) * 2010-09-07 2012-03-14 Technische Universität Graz Method for the determination of the number of superimposed signals using variational bayesian inference
WO2018039904A1 (en) * 2016-08-30 2018-03-08 深圳大学 Block sparse compressive sensing based infrared image reconstruction method and system thereof
CN110161499A (en) * 2019-05-09 2019-08-23 东南大学 Scattering coefficient estimation method is imaged in improved management loading ISAR
CN110596645A (en) * 2019-09-10 2019-12-20 中国人民解放军国防科技大学 Two-dimensional inversion-free sparse Bayesian learning rapid sparse reconstruction method
CN113030972A (en) * 2021-04-26 2021-06-25 西安电子科技大学 Maneuvering target ISAR imaging method based on rapid sparse Bayesian learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
低空小型无人机贝叶斯学习超分辨ISAR成像 (Bayesian learning super-resolution ISAR imaging of small low-altitude UAVs); 刘明昊; 徐久; 赵付成龙; 程凯飞; 杨磊; 雷达科学与技术 (Radar Science and Technology), No. 03; full text *

Also Published As

Publication number Publication date
CN113466864A (en) 2021-10-01

Similar Documents

Publication Publication Date Title
Zhang et al. Resolution enhancement for large-scale real beam mapping based on adaptive low-rank approximation
CN110068805B (en) High-speed target HRRP reconstruction method based on variational Bayesian inference
CN113466864B (en) Rapid combined inverse-free sparse Bayes learning super-resolution ISAR imaging algorithm
CN110726992B (en) SA-ISAR self-focusing method based on structure sparsity and entropy joint constraint
Wang et al. Sparse representation-based ISAR imaging using Markov random fields
CN108226928B (en) Inverse synthetic aperture radar imaging method based on expected propagation algorithm
Zhang et al. Resolution enhancement for ISAR imaging via improved statistical compressive sensing
Zhao et al. Structured sparsity-driven autofocus algorithm for high-resolution radar imagery
CN112859075B (en) Multi-band ISAR fusion high-resolution imaging method
Lorintiu et al. Compressed sensing Doppler ultrasound reconstruction using block sparse Bayesian learning
CN112147608A (en) Rapid Gaussian gridding non-uniform FFT through-wall imaging radar BP method
Wu et al. SAR imaging from azimuth missing raw data via sparsity adaptive StOMP
CN115453528A (en) Method and device for realizing segmented observation ISAR high-resolution imaging based on rapid SBL algorithm
Ge et al. Sparse logistic regression based one-bit sar imaging
Wei et al. Wide angle SAR subaperture imaging based on modified compressive sensing
Rao et al. Comparison of parametric sparse recovery methods for ISAR image formation
Wei et al. Learning-based split unfolding framework for 3-D mmW radar sparse imaging
Nazari et al. High‐dimensional sparse recovery using modified generalised sl0 and its application in 3d ISAR imaging
CN113466865B (en) Combined mode coupling sparse Bayesian learning super-resolution ISAR imaging algorithm
CN108931770B (en) ISAR imaging method based on multi-dimensional beta process linear regression
CN111044996A (en) LFMCW radar target detection method based on dimension reduction approximate message transfer
Wei et al. Multi-angle SAR sparse image reconstruction with improved attributed scattering model
CN115453527A (en) Periodic sectional observation ISAR high-resolution imaging method
Cheng et al. A fast ISAR imaging method based on strategy weighted CAMP algorithm
Zhu et al. Scene segmentation of multi-band ISAR fusion imaging based on MB-PCSBL

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant