CN110780298B - Multi-base ISAR fusion imaging method based on variational Bayes learning - Google Patents

Multi-base ISAR fusion imaging method based on variational Bayes learning

Info

Publication number: CN110780298B (application CN201911058800.8A)
Authority: CN (China)
Prior art keywords: radar, observation, matrix, vector, iteration
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh); other versions: CN110780298A (en)
Inventors: 白雪茹, 赵志强, 祁浩凡, 周峰
Current and original assignee: Xidian University (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by Xidian University; priority to CN201911058800.8A
Publication of CN110780298A (application publication); application granted; publication of CN110780298B (grant)

Landscapes

  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a multi-base ISAR fusion imaging method based on variational Bayesian learning, which mainly solves the prior-art problems of fusing multi-base ISAR images under low signal-to-noise ratio and of the complicated estimation of the target rotation parameter and the radar viewing-angle difference. The scheme comprises the following steps: 1) receive the echoes of two radars and perform range pulse compression; 2) vectorize and splice the pulse-compressed echoes to obtain the observation vector; 3) construct the dictionary matrices of the two radars; 4) construct a fusion imaging model from the observation vector and the two dictionary matrices; 5) estimate the target rotation angle and the observation angle of the second radar; 6) solve the fused scattering-coefficient vector of the fusion imaging model to obtain the fusion imaging result. The method uses a simple estimation procedure, obtains optimal estimates of the target rotation parameter and the radar viewing-angle difference, achieves high-resolution fusion imaging of the target at low signal-to-noise ratio, and can be used for extracting and recognizing target shape features.

Description

Multi-base ISAR fusion imaging method based on variational Bayes learning
Technical Field
The invention belongs to the technical field of radars, and further relates to a multi-base ISAR high-resolution fusion imaging method which can be used for extracting and identifying target shape features.
Background
With the rapid development of inverse synthetic aperture radar (ISAR), existing imaging radars already provide fairly high resolution; nevertheless, when observing space targets such as space debris, small satellites and spacecraft, two-dimensional radar images of still higher resolution are needed to describe the targets accurately. Existing research shows that high-resolution target images observed over a wide range of viewing angles can effectively improve the reliability of target recognition, and that fusing target observations acquired at different viewing angles raises the resolution of the imaging result, laying a foundation for high-performance target recognition. Generally, there are two ways to obtain wide-range, multi-view observations of a turntable target: the first obtains a large viewing angle through long-time observation by a single radar receiver; the second obtains a large viewing angle in a short time by deploying multiple radar receivers at different positions. As a basic target model, the turntable model is usually built over a short time interval to model actual non-cooperative moving targets such as airplanes and ships. Because the motion characteristics of a target may change significantly over a long observation time, translational-motion compensation becomes difficult. Therefore, practical imaging systems mostly adopt the second observation mode at the cost of additional hardware, i.e. a spatially separated multi-receiver imaging system. Research on multi-base ISAR high-resolution fusion imaging is thus of great significance.
Zhixi Li and Scott Papson proposed a dual-view ISAR fusion imaging method based on the matrix Fourier transform in the paper "Data-Level Fusion of Multilook Inverse Synthetic Aperture Radar Images" (IEEE Transactions on Geoscience and Remote Sensing, 2008, 46(5): 1394-1406). The method is implemented in the following steps: construct the dual-view ISAR observation geometry and echo signal model; determine the dual-view ISAR data fusion rule; and solve the fused image using the matrix Fourier transform. However, the method only considers ISAR fusion imaging of an ideal turntable model at different viewing angles, omits the estimation of the target rotation parameters and of the viewing-angle differences between the ISARs, and is therefore unsuitable for imaging non-cooperative moving targets.
An invention patent filed by Tsinghua University (publication No. CN101685154A, application No. 200810223234.7) discloses an image fusion method for bistatic/multistatic inverse synthetic aperture radar (ISAR), with the following specific steps: equally divide the target signal received by each base radar to obtain two range-Doppler images of equal resolution; extract and associate the scattering points of all Doppler images, and estimate the viewing-angle differences of the different radars, the target rotation speed and the equivalent rotation center; and obtain the target fusion imaging result by a convolution/back-projection method. However, this method needs to extract the positions of the scattering points, and under unfavorable conditions such as noise the final imaging result is affected.
Disclosure of Invention
The invention aims to provide a multi-base ISAR high-resolution fusion imaging method based on variational Bayesian learning, to image a target accurately when the signal-to-noise ratio is low and the target rotation parameters and the viewing-angle differences of the radars are unknown, and finally to obtain a well-focused two-dimensional ISAR image.
The basic idea of the invention is as follows: based on compressive-sensing theory, the multi-base ISAR high-resolution fusion imaging problem is converted into a sparse signal-representation problem, the unknown-parameter estimation problem and the fusion imaging problem are solved jointly, and a fused high-resolution two-dimensional ISAR image is obtained at the same time as the optimal estimates of the unknown parameters. The implementation scheme comprises the following steps:
(1) Record the echo signal $S_1$ of the first radar and the echo signal $S_2$ of the second radar with inverse synthetic aperture radar (ISAR); $S_1$ and $S_2$ both have dimension $N_r \times N_a$, where $N_r$ is the number of range samples and $N_a$ is the number of azimuth samples;
(2) Perform range pulse compression on the two radar echo signals $S_1$ and $S_2$ separately to obtain the pulse-compressed first-radar echo $S_1'$ and second-radar echo $S_2'$, and splice the two echo signals $S_1'$ and $S_2'$ column-wise into two column vectors $Y_1$ and $Y_2$; $Y_1$ and $Y_2$ both have dimension $N \times 1$, where $N = N_r \times N_a$;
(3) Connect the two column vectors $Y_1$ and $Y_2$ to obtain the observation vector
$$Y = \begin{bmatrix} Y_1 \\ Y_2 \end{bmatrix},$$
whose dimension is $M \times 1$, where $M = 2N$;
(4) Divide the imaging scene equally into K segments along the x direction, the scene length in x being A, and into L segments along the y direction, the scene length in y being R; express the scattering coefficients of the imaging region as a matrix $\Omega$ and splice the scattering-coefficient matrix column-wise into a column vector $\sigma$, where $\Omega$ has dimension $K \times L$, $\sigma$ has dimension $Q \times 1$, and $Q = K \times L$;
(5) Construct the dictionary matrix $\Phi_1$ of the first radar and the dictionary matrix $\Phi_2$ of the second radar, and splice the two dictionary matrices column-wise to obtain the dictionary matrix corresponding to the $M \times 1$ observation vector,
$$\Phi = \begin{bmatrix} \Phi_1 \\ \Phi_2 \end{bmatrix},$$
where $\Phi$ has dimension $M \times Q$ and $\Phi_1$ and $\Phi_2$ both have dimension $N \times Q$;
(6) Construct a fusion imaging model from the observation vector Y and the dictionary matrix $\Phi$: $Y = \Phi\sigma + n$, where n is a noise vector of dimension $M \times 1$;
(7) Set the rotation angle $\theta$ and the second-radar observation angle $\beta_2$ and optimize them to obtain the optimal estimates $\hat\theta$ and $\hat\beta_2$:
(7a) Set the initial estimation interval of the rotation angle $\theta$ to $[\theta_{\min}, \theta_{\max}]$ with estimation step $\Delta\theta$, set the initial estimation interval of the second-radar observation angle $\beta_2$ to $[\beta_{2\min}, \beta_{2\max}]$ with estimation step $\Delta\beta_2$, and set the initial iteration number $i = 1$;
(7b) From the estimation intervals and steps set in (7a), calculate the i-th-iteration value of the rotation angle, $\theta^i = \theta_{\min} + i\,\Delta\theta$, and of the second-radar observation angle, $\beta_2^i = \beta_{2\min} + i\,\Delta\beta_2$;
(7c) Construct the dictionary matrix $\Phi$ corresponding to the i-th-iteration fusion imaging model and solve the scattering-coefficient vector $\sigma$ of that model;
(7d) Calculate the mean square error between the observation vector Y and the product of the dictionary matrix $\Phi$ and the scattering-coefficient vector $\sigma$ at the i-th iteration,
$$e^i = \lVert Y - \Phi\sigma \rVert_2^2,$$
and record the corresponding rotation-angle value $\theta^i$ and second-radar observation-angle value $\beta_2^i$, where $\lVert\cdot\rVert_2^2$ denotes the squared 2-norm of a vector;
(7e) Judge whether the iteration termination condition is met: if $\theta^i > \theta_{\max}$ and $\beta_2^i > \beta_{2\max}$ are satisfied simultaneously, terminate the iteration and obtain coarse estimates of the rotation angle $\theta$ and the second-radar observation angle $\beta_2$; otherwise set $i = i + 1$ and return to step (7b);
(7f) Gradually reduce the estimation intervals and steps of the rotation angle $\theta$ and the second-radar observation angle $\beta_2$ and repeat (7b)-(7e) to obtain the optimal estimates $\hat\theta$ and $\hat\beta_2$;
(8) Using the optimal rotation-angle estimate $\hat\theta$ and the optimal second-radar observation-angle estimate $\hat\beta_2$ obtained in step (7), construct the optimal fusion imaging model $Y' = \Phi'\sigma' + n$, solve the fused scattering-coefficient vector $\sigma'$ of the optimal model by variational inference, and reshape $\sigma'$ into the fused scattering-coefficient matrix $\Omega'$ to obtain the final fusion imaging result.
The invention has the following advantages:
1. By using a parameterized dictionary, the invention learns the target rotation angle and the radar viewing-angle difference dynamically during radar detection, so that optimal estimates of both can be obtained without a cumbersome scattering-point extraction procedure.
2. The invention estimates the fused scattering-coefficient vector by variational inference, achieving robust imaging under complex observation conditions such as low signal-to-noise ratio.
3. The invention performs fusion imaging with multistatic radar; by fusing information acquired at different viewing angles it improves the imaging quality of the target and acquires target information more accurately.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention;
FIG. 2 is a graph of the distribution of target equivalent scattering centers in the present invention;
FIG. 3 is a diagram of the first radar imaging result in the present invention;
FIG. 4 is a diagram of a second radar imaging result in the present invention;
FIG. 5 is a plot of mean square error of an unknown parameter estimate obtained using the present invention;
FIG. 6 is a diagram of the result of two radar fusion images obtained by the present invention.
Detailed Description
The following describes in detail specific embodiments and effects of the present invention with reference to the accompanying drawings.
Referring to fig. 1, the implementation steps of the invention are as follows:
step 1, ISAR records echo signals S of two radars 1 And S 2
ISAR records echo signals S of two radars 1 And S 2 The method is characterized in that after two electromagnetic waves transmitted by inverse synthetic aperture radars working at different observation angles meet a target in the transmission process, the target reflects the electromagnetic waves, the reflected echoes are received by a radar receiver, and an echo signal S of a first radar is displayed on a radar display 1 And echo signal S of the second radar 2
Step 2. Perform range pulse compression on the two radar echo signals $S_1$ and $S_2$.
Range pulse compression can be implemented by matched filtering or by dechirping; this example uses dechirping, without being limited to it, and proceeds as follows:
2a) Take the distance from the inverse synthetic aperture radar to the scene center as the reference distance, and take a linear frequency-modulated signal with the same carrier frequency and chirp rate as the radar's transmitted signal as the reference signal;
2b) Conjugate the reference signal and multiply it with the echo signals received by the two radars to obtain the dechirped echo signals of the two radars:
$$S_{11} = S_{r1}^*(\hat t, t_m)\, S_1, \qquad S_{22} = S_{r2}^*(\hat t, t_m)\, S_2,$$
where $\hat t$ is the range fast time, $t_m$ is the azimuth slow time, $S_{r1}(\cdot)$ is the reference signal of the first radar, $S_{r2}(\cdot)$ is the reference signal of the second radar, $S_{11}$ is the dechirped signal of the first radar, $S_{22}$ is the dechirped signal of the second radar, and $*$ denotes conjugation;
2c) Perform a one-dimensional inverse Fourier transform of the dechirped echo signals $S_{11}$ and $S_{22}$ along the range dimension to obtain the pulse-compressed echo signal $S_1'$ of the first radar and $S_2'$ of the second radar.
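The dechirp-based range compression of step 2 can be sketched in a few lines of code. This is a minimal sketch under assumptions: the linear-FM reference signal below is the standard form implied by 2a), and the function name, its arguments and the chirp-rate/carrier parameters are illustrative, since the patent gives the reference-signal expression only implicitly.

```python
import numpy as np

def dechirp_and_compress(echo, t_fast, r_ref, gamma, fc, c=3e8):
    """Range pulse compression by dechirping (steps 2a)-2c)).

    echo   : (Nr, Na) complex raw echo, fast time along axis 0
    t_fast : (Nr,) fast-time axis within the pulse [s]
    r_ref  : reference distance from the radar to the scene centre [m]
    gamma  : chirp rate of the transmitted LFM signal [Hz/s]
    fc     : carrier frequency [Hz]
    """
    tau = 2.0 * r_ref / c                        # two-way reference delay
    t = t_fast[:, None] - tau                    # fast time relative to the reference
    # assumed LFM reference with the same carrier frequency and chirp rate
    s_ref = np.exp(1j * 2 * np.pi * (fc * t + 0.5 * gamma * t**2))
    dechirped = np.conj(s_ref) * echo            # 2b) conjugated reference times echo
    return np.fft.ifft(dechirped, axis=0)        # 2c) 1-D inverse FFT along range

# S1p = dechirp_and_compress(S1, t_fast, r_ref, gamma, fc)  # S1'
# S2p = dechirp_and_compress(S2, t_fast, r_ref, gamma, fc)  # S2'
```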
Step 3. Construct the dictionary matrices $\Phi_1$ and $\Phi_2$ of the two radars.
3a) Let the two-dimensional imaging scene have length A meters in the x direction and R meters in the y direction;
3b) Divide the imaging scene equally into K segments of length $A/K$ along the x direction and into L segments of length $R/L$ along the y direction; the scattering coefficients of the imaging region can then be expressed as a matrix $\Omega$, which is spliced column-wise into a column vector $\sigma$, where $\Omega$ has dimension $K \times L$, $\sigma$ has dimension $Q \times 1$, and $Q = K \times L$;
3c) Each of the two radars acquires the observation matrices of all possible target scattering-point positions on the two-dimensional imaging grid, $\phi_1^{(m,n)}$ and $\phi_2^{(m,n)}$, where $\phi_1^{(m,n)}$ denotes the observation matrix acquired by the first radar for a scattering point at $(m,n)$ and $\phi_2^{(m,n)}$ the one acquired by the second radar; $\phi_1^{(m,n)}$ and $\phi_2^{(m,n)}$ both have dimension $N_r \times N_a$, with $m = 1, 2, \ldots, K$ and $n = 1, 2, \ldots, L$;
3d) Vectorize the observation matrices of the scattering points on all grid cells acquired by the two radars, $\varphi_1^{(m,n)} = \mathrm{vec}(\phi_1^{(m,n)})$ and $\varphi_2^{(m,n)} = \mathrm{vec}(\phi_2^{(m,n)})$, where $\varphi_1^{(m,n)}$ is the vectorized column vector of the first radar's observation matrix for the scattering point at $(m,n)$, $\varphi_2^{(m,n)}$ is that of the second radar, and $\mathrm{vec}(\cdot)$ is the vectorization function, i.e. each column of the matrix is stacked beneath the previous one, converting the matrix into a column vector;
3e) Form a dictionary matrix for each radar from the vectorized column vectors of the observation matrices of the scattering points on all grid cells: the dictionary matrix of the first radar,
$$\Phi_1 = \big[\varphi_1^{(1,1)}, \varphi_1^{(2,1)}, \ldots, \varphi_1^{(K,L)}\big],$$
and the dictionary matrix of the second radar,
$$\Phi_2 = \big[\varphi_2^{(1,1)}, \varphi_2^{(2,1)}, \ldots, \varphi_2^{(K,L)}\big].$$
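The column ordering of each dictionary matters because it must match $\mathrm{vec}(\Omega)$. A minimal sketch of steps 3c)-3e) follows, assuming a caller-supplied helper obs_matrix(m, n) that evaluates the observation matrix of one grid cell; the patent gives that phase model only as an equation image, so it is left abstract here.

```python
import numpy as np

def build_dictionary(obs_matrix, K, L):
    """Steps 3c)-3e): vectorize the per-cell observation matrices and stack
    them as dictionary columns, in the same column-major order as vec(Omega)."""
    columns = []
    for n in range(L):                  # grid columns, outer index
        for m in range(K):              # grid rows, inner index
            phi_mn = obs_matrix(m, n)                   # (Nr, Na) observation matrix
            columns.append(phi_mn.flatten(order='F'))   # vec(.): stack columns
    return np.stack(columns, axis=1)    # N x Q with N = Nr*Na, Q = K*L

# Phi1 = build_dictionary(obs_matrix_radar1, K, L)   # first radar
# Phi2 = build_dictionary(obs_matrix_radar2, K, L)   # second radar
```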
and 4, constructing a fusion imaging model.
4a) Vectorize the two pulse-compressed echo signals $S_1'$ and $S_2'$ to obtain two column vectors, $Y_1 = \mathrm{vec}(S_1')$ and $Y_2 = \mathrm{vec}(S_2')$, and connect $Y_1$ and $Y_2$ to obtain the observation vector
$$Y = \begin{bmatrix} Y_1 \\ Y_2 \end{bmatrix};$$
4b) Connect the dictionary matrices $\Phi_1$ and $\Phi_2$ of the two radars column-wise to obtain the dictionary matrix
$$\Phi = \begin{bmatrix} \Phi_1 \\ \Phi_2 \end{bmatrix};$$
4c) Construct the fusion imaging model from the observation vector Y and the dictionary matrix $\Phi$: $Y = \Phi\sigma + n$, where n is a noise vector.
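Assembling the model of step 4 is then a pair of concatenations. A short sketch using the arrays from the previous snippets (S1p, S2p, Phi1, Phi2 are assumed names):

```python
import numpy as np

Y1 = S1p.flatten(order='F')        # 4a) vec(S1'), length N = Nr*Na
Y2 = S2p.flatten(order='F')        #     vec(S2')
Y = np.concatenate([Y1, Y2])       #     observation vector, length M = 2N
Phi = np.vstack([Phi1, Phi2])      # 4b) joint dictionary, M x Q
# 4c) fusion imaging model: Y = Phi @ sigma + n
```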
Step 5. Set the rotation angle $\theta$ and the second-radar observation angle $\beta_2$ and optimize them to obtain the optimal estimates $\hat\theta$ and $\hat\beta_2$.
5a) Set the initial estimation interval of the rotation angle $\theta$ to $[\theta_{\min}, \theta_{\max}]$ with estimation step $\Delta\theta$, the initial estimation interval of the second-radar observation angle $\beta_2$ to $[\beta_{2\min}, \beta_{2\max}]$ with estimation step $\Delta\beta_2$, and the initial iteration number $i = 1$;
5b) From the estimation intervals and steps set in 5a), calculate the i-th-iteration value of the rotation angle, $\theta^i = \theta_{\min} + i\,\Delta\theta$, and of the second-radar observation angle, $\beta_2^i = \beta_{2\min} + i\,\Delta\beta_2$;
5c) Construct the dictionary matrix $\Phi$ corresponding to the i-th-iteration fusion imaging model and solve the scattering-coefficient vector $\sigma$ of that model, specifically as follows:
5c1) Convert the real and imaginary parts of the observation vector Y separately into a real vector
$$\tilde Y = \begin{bmatrix} \mathrm{Re}(Y) \\ \mathrm{Im}(Y) \end{bmatrix},$$
whose dimension is $2M \times 1$, and convert the real and imaginary parts of the dictionary matrix $\Phi$ separately into a real matrix
$$\tilde\Phi = \begin{bmatrix} \mathrm{Re}(\Phi) & -\mathrm{Im}(\Phi) \\ \mathrm{Im}(\Phi) & \mathrm{Re}(\Phi) \end{bmatrix},$$
whose dimension is $2M \times 2Q$, where Re denotes the real part and Im the imaginary part;
5c2) Set the initial iteration number $j = 1$, the precisions $\lambda_p = 10^5$, $p = 1, \ldots, Q$, the precision matrix $A^0 = \mathrm{diag}(\lambda_1, \ldots, \lambda_Q)$, the noise-precision parameter $c^0 = 10^{-3}$, the initial values of the four prior parameters $a_0 = b_0 = e_0 = f_p^0 = 10^{-4}$, the initial coefficient vector $\omega^0$ to a $Q \times 1$ zero vector, the maximum iteration number $\mathrm{iter} = 20$, and the termination threshold $w_{th} = 10^{-3}$;
5c3) Calculate the first parameter $a^j$ and the third parameter $e^j$ of the j-th iteration, i.e. the variational updates of the two Gamma shape parameters (in the standard formulation $a^j = a_0 + M$ and $e^j = e_0 + \tfrac{1}{2}$);
5c4) Calculate the covariance matrix of the j-th iteration:
$$\Sigma^j = \big(c^{j-1}\,\tilde\Phi^T \tilde\Phi + A^{j-1}\big)^{-1};$$
5c5) Calculate the mean vector of the j-th iteration,
$$\mu^j = c^{j-1}\,\Sigma^j \tilde\Phi^T \tilde Y,$$
and let the weight-vector mean of the j-th iteration be $\omega^j = \mu^j$;
5c6) Calculate the fourth parameter of the j-th iteration,
$$f_p^j = f_p^0 + \tfrac{1}{2}\big[(\omega_p^j)^2 + \Sigma_{pp}^j\big],$$
where $\omega_p^j$ is the p-th element of the weight-vector mean $\omega^j$ and $\Sigma_{pp}^j$ is the element in row p, column p of the covariance matrix $\Sigma^j$;
5c7) Calculate the second parameter of the j-th iteration (in the standard formulation $b^j = b_0 + \tfrac{1}{2}\big[\lVert \tilde Y - \tilde\Phi\,\omega^j \rVert_2^2 + \mathrm{tr}(\tilde\Phi\,\Sigma^j \tilde\Phi^T)\big]$);
5c8) Calculate each element of the precision matrix $A^j$ of the j-th iteration, $\lambda_p^j = e^j / f_p^j$, i.e. $A^j = \mathrm{diag}(\lambda_1^j, \ldots, \lambda_Q^j)$;
5c9) Calculate the noise precision of the j-th iteration, $c^j = a^j / b^j$;
5c10) Judge whether the iteration termination condition is met: if $\lVert \omega^j - \omega^{j-1} \rVert_2 < w_{th}$ or $j > \mathrm{iter}$, terminate the iteration to obtain the scattering-coefficient vector $\sigma$; otherwise set $j = j + 1$ and return to step 5c3). A code sketch of the whole inner solver follows.
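The sketch below implements steps 5c1)-5c10) compactly. It rests on stated assumptions: the complex-to-real mapping and the Gamma-update constants follow the standard variational treatment of Bayesian linear regression with automatic relevance determination (the patent gives these formulas only as equation images), and one precision is kept per real coefficient, where the patent keeps Q precisions.

```python
import numpy as np

def vb_sbl(Y, Phi, a0=1e-4, b0=1e-4, e0=1e-4, f0=1e-4,
           lam_init=1e5, c_init=1e-3, max_iter=20, wth=1e-3):
    """Variational Bayesian sparse solver for Y = Phi @ sigma + n (a sketch)."""
    # 5c1) complex -> real: Yt is 2M x 1, Phit is 2M x 2Q
    Yt = np.concatenate([Y.real, Y.imag])
    Phit = np.block([[Phi.real, -Phi.imag],
                     [Phi.imag,  Phi.real]])
    M2, Q2 = Phit.shape
    # 5c2) initialisation
    lam = np.full(Q2, lam_init)            # coefficient precisions
    c = c_init                             # noise precision
    w = np.zeros(Q2)
    a = a0 + M2 / 2.0                      # 5c3) Gamma shape updates (constant;
    e = e0 + 0.5                           #      assumed standard values)
    for _ in range(max_iter):
        # 5c4)-5c5) posterior covariance and mean of the coefficients
        Sigma = np.linalg.inv(c * Phit.T @ Phit + np.diag(lam))
        w_new = c * Sigma @ Phit.T @ Yt
        # 5c6), 5c8) update the coefficient precisions
        f = f0 + 0.5 * (w_new**2 + np.diag(Sigma))
        lam = e / f
        # 5c7), 5c9) update the noise precision from the residual
        resid = Yt - Phit @ w_new
        b = b0 + 0.5 * (resid @ resid + np.trace(Phit @ Sigma @ Phit.T))
        c = a / b
        # 5c10) terminate when the weight vector stops changing
        if np.linalg.norm(w_new - w) < wth:
            w = w_new
            break
        w = w_new
    return w[:Q2 // 2] + 1j * w[Q2 // 2:]  # back to a complex sigma
```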
5d) Calculate the mean square error between the observation vector Y and the product of the dictionary matrix $\Phi$ and the scattering-coefficient vector $\sigma$ at the i-th iteration,
$$e^i = \lVert Y - \Phi\sigma \rVert_2^2,$$
and record the corresponding rotation-angle value $\theta^i$ and second-radar observation-angle value $\beta_2^i$, where $\lVert\cdot\rVert_2^2$ denotes the squared 2-norm of a vector;
5e) Judge whether the iteration termination condition is met: if $\theta^i > \theta_{\max}$ and $\beta_2^i > \beta_{2\max}$ are satisfied simultaneously, terminate the iteration and obtain coarse estimates of the rotation angle $\theta$ and the second-radar observation angle $\beta_2$; otherwise set $i = i + 1$ and return to step 5b);
5f) Gradually reduce the estimation intervals and steps of the rotation angle $\theta$ and the second-radar observation angle $\beta_2$, and repeat 5b)-5e) to obtain the optimal estimates $\hat\theta$ and $\hat\beta_2$; a code sketch of this search procedure follows.
Step 6. Obtain the final fusion imaging result.
6a) Using the optimal rotation-angle estimate $\hat\theta$ and the optimal second-radar observation-angle estimate $\hat\beta_2$ obtained in step 5, construct the optimal fusion imaging model $Y' = \Phi'\sigma' + n$ with the corresponding dictionary matrix
$$\Phi' = \begin{bmatrix} \Phi_1' \\ \Phi_2' \end{bmatrix},$$
where $\Phi_1' = \big[\varphi_1'^{(1,1)}, \ldots, \varphi_1'^{(K,L)}\big]$ is the dictionary matrix of the first radar formed from the vectorized column vectors of its optimal observation matrices, $\varphi_1'^{(m,n)} = \mathrm{vec}(\phi_1'^{(m,n)})$ is the vectorized column vector of the first radar's optimal observation matrix for the scattering point at $(m,n)$, and $\phi_1'^{(m,n)}$ is that optimal observation matrix; likewise, $\Phi_2' = \big[\varphi_2'^{(1,1)}, \ldots, \varphi_2'^{(K,L)}\big]$ is the dictionary matrix of the second radar formed from the vectorized column vectors of its optimal observation matrices, $\varphi_2'^{(m,n)} = \mathrm{vec}(\phi_2'^{(m,n)})$, and $\phi_2'^{(m,n)}$ is the second radar's optimal observation matrix for the scattering point at $(m,n)$. The optimal observation matrices are evaluated with the estimates $\hat\theta$ and $\hat\beta_2$, where B is the signal bandwidth, c is the speed of light and $f_0$ is the signal carrier frequency;
6b) Solve the fused scattering-coefficient vector $\sigma'$ of the optimal fusion imaging model by variational inference, specifically:
6b1) convert the real and imaginary parts of the dictionary matrix $\Phi'$ separately into a real matrix $\tilde\Phi'$ of dimension $2M \times 2Q$, as in step 5c1);
6b2) solve according to steps 5c2)-5c10) to obtain the fused scattering-coefficient vector $\sigma'$;
6c) Reshape the fused scattering-coefficient vector $\sigma'$ into the fused scattering-coefficient matrix $\Omega'$ to obtain the final fusion imaging result.
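Restoring $\sigma'$ to the image matrix must undo $\mathrm{vec}(\cdot)$, which stacked $\Omega$ column by column. A one-line sketch (sigma_fused, K, L are assumed from the context above):

```python
import numpy as np

Omega_fused = sigma_fused.reshape((K, L), order='F')  # 6c) undo vec(.)
image = np.abs(Omega_fused)                           # magnitude image for display
```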
The effects of the present invention can be further illustrated by the following simulations:
1. Simulation parameters
Two radars working in the X band receive the echo signals; the carrier frequency is 10 GHz and the bandwidth is 600 MHz; the observation angle of radar 1 is 0° and that of radar 2 is 3°. The imaging scene is 12 m long and 6 m wide, the target comprises 33 scattering points, the target rotation angle is 0.0075 rad, and the signal-to-noise ratio is set to 5 dB.
2. Simulation content
Simulation 1: the target of FIG. 2 is imaged by the first radar and its two-dimensional ISAR image is drawn; the result is shown in FIG. 3.
Simulation 2: the target of FIG. 2 is imaged by the second radar and its two-dimensional ISAR image is drawn; the result is shown in FIG. 4.
Simulation 3: the rotation angle and the second-radar observation angle are estimated by the proposed method and the mean-square-error plot is drawn; the result is shown in FIG. 5.
Simulation 4: the fusion imaging result of the two radars for the target of FIG. 2 is solved by the proposed method and the two-dimensional ISAR image is drawn; the result is shown in FIG. 6.
Compared with the single-radar imaging results, the multi-base fusion image obtained by the proposed method has fewer false points; all scattering points are correctly recovered, their positions and amplitudes are reconstructed accurately, and the image quality is markedly improved.
as can be seen from fig. 5, the target rotation angle estimation value corresponding to the minimum mean square error is 0.0075rad, and the second radar observation angle estimation value is 3 °.
The simulation results show that, by converting the multi-base ISAR fusion imaging problem into a parameterized sparse-representation problem using compressive-sensing theory and solving the unknown-parameter estimation and fusion imaging jointly, the method obtains a high-quality target image together with the optimal estimates of the unknown parameters, and is capable of obtaining high-resolution two-dimensional ISAR images within a short coherent processing time.

Claims (4)

1. The multi-base ISAR fusion imaging method based on variational Bayesian learning is characterized by comprising the following steps:
(1) Record the echo signal $S_1$ of the first radar and the echo signal $S_2$ of the second radar with inverse synthetic aperture radar (ISAR); $S_1$ and $S_2$ both have dimension $N_r \times N_a$, where $N_r$ is the number of range samples and $N_a$ is the number of azimuth samples;
(2) Perform range pulse compression on the two radar echo signals $S_1$ and $S_2$ separately to obtain the pulse-compressed first-radar echo $S_1'$ and second-radar echo $S_2'$, and vectorize the two echo signals $S_1'$ and $S_2'$ to obtain two column vectors, $Y_1 = \mathrm{vec}(S_1')$ and $Y_2 = \mathrm{vec}(S_2')$; $Y_1$ and $Y_2$ both have dimension $N \times 1$, where $N = N_r \times N_a$;
(3) Connect the two column vectors $Y_1$ and $Y_2$ to obtain the observation vector
$$Y = \begin{bmatrix} Y_1 \\ Y_2 \end{bmatrix},$$
whose dimension is $M \times 1$, where $M = 2N$;
(4) Divide the imaging scene equally into K segments along the x direction, the scene length in x being A, and into L segments along the y direction, the scene length in y being R; express the scattering coefficients of the imaging region as a matrix $\Omega$ and splice the scattering-coefficient matrix column-wise into a column vector $\sigma$, where $\Omega$ has dimension $K \times L$, $\sigma$ has dimension $Q \times 1$, and $Q = K \times L$;
(5) Construct the dictionary matrix $\Phi_1$ of the first radar and the dictionary matrix $\Phi_2$ of the second radar, and splice the two dictionary matrices column-wise to obtain the dictionary matrix corresponding to the $M \times 1$ observation vector,
$$\Phi = \begin{bmatrix} \Phi_1 \\ \Phi_2 \end{bmatrix},$$
where $\Phi$ has dimension $M \times Q$ and $\Phi_1$ and $\Phi_2$ both have dimension $N \times Q$;
(6) Construct a fusion imaging model from the observation vector Y and the dictionary matrix $\Phi$: $Y = \Phi\sigma + n$, where n is a noise vector of dimension $M \times 1$;
(7) Set the rotation angle $\theta$ and the second-radar observation angle $\beta_2$ and optimize them to obtain the optimal estimates $\hat\theta$ and $\hat\beta_2$:
(7a) Set the initial estimation interval of the rotation angle $\theta$ to $[\theta_{\min}, \theta_{\max}]$ with estimation step $\Delta\theta$, set the initial estimation interval of the second-radar observation angle $\beta_2$ to $[\beta_{2\min}, \beta_{2\max}]$ with estimation step $\Delta\beta_2$, and set the initial iteration number $i = 1$;
(7b) From the estimation intervals and steps set in (7a), calculate the i-th-iteration value of the rotation angle, $\theta^i = \theta_{\min} + i\,\Delta\theta$, and of the second-radar observation angle, $\beta_2^i = \beta_{2\min} + i\,\Delta\beta_2$;
(7c) Construct the dictionary matrix $\Phi$ corresponding to the i-th-iteration fusion imaging model and solve the scattering-coefficient vector $\sigma$ of that model;
(7d) Calculate the mean square error between the observation vector Y and the product of the dictionary matrix $\Phi$ and the scattering-coefficient vector $\sigma$ at the i-th iteration,
$$e^i = \lVert Y - \Phi\sigma \rVert_2^2,$$
and record the corresponding rotation-angle value $\theta^i$ and second-radar observation-angle value $\beta_2^i$, where $\lVert\cdot\rVert_2^2$ denotes the squared 2-norm of a vector;
(7e) Judge whether the iteration termination condition is met: if $\theta^i > \theta_{\max}$ and $\beta_2^i > \beta_{2\max}$ are satisfied simultaneously, terminate the iteration and obtain coarse estimates of the rotation angle $\theta$ and the second-radar observation angle $\beta_2$; otherwise set $i = i + 1$ and return to step (7b);
(7f) Gradually reduce the estimation intervals and steps of the rotation angle $\theta$ and the second-radar observation angle $\beta_2$ and repeat (7b)-(7e) to obtain the optimal estimates $\hat\theta$ and $\hat\beta_2$;
(8) Using the optimal rotation-angle estimate $\hat\theta$ and the optimal second-radar observation-angle estimate $\hat\beta_2$ obtained in step (7), construct the optimal fusion imaging model $Y' = \Phi'\sigma' + n$, solve the fused scattering-coefficient vector $\sigma'$ of the optimal model by variational inference, and reshape $\sigma'$ into the fused scattering-coefficient matrix $\Omega'$ to obtain the final fusion imaging result.
2. The method of claim 1, wherein the range-wise pulse compression of the two radar echo signals in step (2) is performed by:
(2a) Take the distance from the inverse synthetic aperture radar to the scene center as the reference distance, and take a linear frequency-modulated signal with the same carrier frequency and chirp rate as the radar's transmitted signal as the reference signal;
(2b) Conjugate the reference signal and multiply it with the echo signals $S_1$ and $S_2$ received by the two radars to obtain the dechirped echo signal $S_{11}$ of the first radar and $S_{22}$ of the second radar;
(2c) Perform a one-dimensional inverse Fourier transform of the dechirped echo signals $S_{11}$ and $S_{22}$ along the range dimension to obtain the pulse-compressed echo signal $S_1'$ of the first radar and $S_2'$ of the second radar.
3. The method of claim 1, wherein two radar dictionary matrices are constructed in step (5) by:
(5a) Each of the two radars acquires the observation matrices $\phi_1^{(m,n)}$ and $\phi_2^{(m,n)}$ of all possible target scattering-point positions on the two-dimensional imaging grid, where $\phi_1^{(m,n)}$ denotes the observation matrix of the first radar for a scattering point at $(m,n)$ and $\phi_2^{(m,n)}$ that of the second radar; $\phi_1^{(m,n)}$ and $\phi_2^{(m,n)}$ both have dimension $N_r \times N_a$, with $m = 1, 2, \ldots, K$ and $n = 1, 2, \ldots, L$;
(5b) Vectorize the observation matrices of the scattering points on all grid cells acquired by the two radars to obtain the vectorized column vectors $\varphi_1^{(m,n)} = \mathrm{vec}(\phi_1^{(m,n)})$ and $\varphi_2^{(m,n)} = \mathrm{vec}(\phi_2^{(m,n)})$, where $\varphi_1^{(m,n)}$ denotes the vectorized column vector of the first radar's observation matrix for the scattering point at $(m,n)$, $\varphi_2^{(m,n)}$ denotes that of the second radar, and $\mathrm{vec}(\cdot)$ is the vectorization function, i.e. each column of the matrix is stacked beneath the previous one, converting the matrix into a column vector;
(5c) Form a dictionary matrix for each radar from the vectorized column vectors of the observation matrices of the scattering points on all grid cells: the dictionary matrix of the first radar,
$$\Phi_1 = \big[\varphi_1^{(1,1)}, \varphi_1^{(2,1)}, \ldots, \varphi_1^{(K,L)}\big],$$
and the dictionary matrix of the second radar,
$$\Phi_2 = \big[\varphi_2^{(1,1)}, \varphi_2^{(2,1)}, \ldots, \varphi_2^{(K,L)}\big].$$
4. The method of claim 1, wherein the scattering-coefficient vector $\sigma$ in step (7c) is solved as follows:
(7c1) Convert the real and imaginary parts of the observation vector Y separately into a real vector $\tilde Y$ of dimension $2M \times 1$, and convert the real and imaginary parts of the dictionary matrix $\Phi$ separately into a real matrix $\tilde\Phi$ of dimension $2M \times 2Q$, where Re denotes the real part and Im the imaginary part;
(7c2) Set the initial iteration number $i = 1$, the precisions $\lambda_p = 10^5$, $p = 1, \ldots, Q$, the precision matrix $A^0 = \mathrm{diag}(\lambda_1, \ldots, \lambda_Q)$, the noise-precision parameter $c^0 = 10^{-3}$, the initial values of the four prior parameters $a_0 = b_0 = e_0 = f_p^0 = 10^{-4}$, the maximum iteration number $\mathrm{iter} = 20$, and the termination threshold $w_{th} = 10^{-3}$;
(7c3) Calculate the first parameter $a^i$ and the third parameter $e^i$ of the i-th iteration, i.e. the variational updates of the two Gamma shape parameters (in the standard formulation $a^i = a_0 + M$ and $e^i = e_0 + \tfrac{1}{2}$);
(7c4) Calculate the covariance matrix of the i-th iteration:
$$\Sigma^i = \big(c^{i-1}\,\tilde\Phi^T \tilde\Phi + A^{i-1}\big)^{-1};$$
(7c5) Calculate the mean vector of the i-th iteration,
$$\mu^i = c^{i-1}\,\Sigma^i \tilde\Phi^T \tilde Y,$$
and let the weight-vector mean of the i-th iteration be $\omega^i = \mu^i$;
(7c6) Calculate the fourth parameter of the i-th iteration,
$$f_p^i = f_p^0 + \tfrac{1}{2}\big[(\omega_p^i)^2 + \Sigma_{pp}^i\big],$$
where $\omega_p^i$ is the p-th element of the weight-vector mean $\omega^i$ and $\Sigma_{pp}^i$ is the element in row p, column p of the covariance matrix $\Sigma^i$;
(7c7) Calculate the second parameter of the i-th iteration (in the standard formulation $b^i = b_0 + \tfrac{1}{2}\big[\lVert \tilde Y - \tilde\Phi\,\omega^i \rVert_2^2 + \mathrm{tr}(\tilde\Phi\,\Sigma^i \tilde\Phi^T)\big]$);
(7c8) Calculate each element of the precision matrix $A^i$ of the i-th iteration, $\lambda_p^i = e^i / f_p^i$, i.e. $A^i = \mathrm{diag}(\lambda_1^i, \ldots, \lambda_Q^i)$;
(7c9) Calculate the noise precision of the i-th iteration, $c^i = a^i / b^i$;
(7c10) Judge whether the iteration termination condition is met: if $\lVert \omega^i - \omega^{i-1} \rVert_2 < w_{th}$ or $i > \mathrm{iter}$, terminate the iteration to obtain the target scattering-coefficient vector $\sigma$; otherwise set $i = i + 1$ and return to step (7c3).
CN201911058800.8A 2019-11-01 2019-11-01 Multi-base ISAR fusion imaging method based on variational Bayes learning Active CN110780298B (en)

Publications (2)

CN110780298A (en) 2020-02-11
CN110780298B (en) 2023-04-07



Citations (1)

* Cited by examiner, † Cited by third party

Publication number / Priority date / Publication date / Assignee / Title
CN109507666A * 2018-12-21 2019-03-22 Xidian University ISAR sparse-band imaging method based on an off-grid variational Bayesian algorithm

Family Cites Families (8)

US7598900B2 * 2007-11-09 2009-10-06 The Boeing Company Multi-spot inverse synthetic aperture radar imaging
CN101685154B * 2008-09-27 2012-12-26 Tsinghua University Image fusion method of double/multiple base inverse synthetic aperture radar
US9335409B2 * 2013-03-20 2016-05-10 Raytheon Company Bistatic inverse synthetic aperture radar imaging
US9702971B2 * 2014-03-17 2017-07-11 Raytheon Company High-availability ISAR image formation
CN107132535B * 2017-04-07 2019-12-10 Xidian University ISAR sparse band imaging method based on variational Bayesian learning algorithm
KR102015177B1 * 2017-09-27 2019-10-21 POSTECH Academy-Industry Foundation Apparatus and method for autofocusing and cross-range scaling of ISAR images using compressive sensing
CN109633647B * 2019-01-21 2022-02-08 Army Engineering University of PLA Bistatic ISAR sparse aperture imaging method
CN110068805B * 2019-05-05 2020-07-10 National University of Defense Technology High-speed target HRRP reconstruction method based on variational Bayesian inference


Legal Events

Code / Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant