CN110780298A - Multi-base ISAR fusion imaging method based on variational Bayes learning - Google Patents

Publication number: CN110780298A (application CN201911058800.8A); granted as CN110780298B
Country: China; original language: Chinese (zh)
Inventors: 白雪茹, 赵志强, 祁浩凡, 周峰
Applicant and current assignee: Xian University of Electronic Science and Technology
Legal status: Granted; Active (the status listed by Google Patents is an assumption, not a legal conclusion)
Prior art keywords: radar, observation, matrix, vector, iteration
Abstract

The invention discloses a multi-base ISAR fusion imaging method based on variational Bayesian learning, which mainly addresses two problems in the prior art: fusing multi-base ISAR images at low signal-to-noise ratio, and the cumbersome estimation of the target rotation parameters and of the differences between the radar observation angles. The scheme comprises the following steps: 1) receive the echoes of two radars and perform range pulse compression; 2) vectorize and concatenate the pulse-compressed echoes to obtain the observation vector; 3) construct the dictionary matrices of the two radars; 4) construct a fusion imaging model from the observation vector and the two dictionary matrices; 5) estimate the target rotation angle and the observation angle of the second radar; 6) solve the fused scattering-coefficient vector of the fusion imaging model to obtain the fusion imaging result. The method uses a simple estimation procedure, obtains optimal estimates of the target rotation parameters and of the radar observation-angle difference, achieves high-resolution fusion imaging of the target at low signal-to-noise ratio, and can be used for extracting and recognizing target shape features.

Description

Multi-base ISAR fusion imaging method based on variational Bayes learning
Technical Field
The invention belongs to the technical field of radar, and more specifically relates to a multi-base ISAR high-resolution fusion imaging method that can be used for extracting and recognizing target shape features.
Background
With the rapid development of inverse synthetic aperture radar (ISAR), existing imaging radars already provide fairly high resolution; nevertheless, when observing space targets such as space debris, small satellites, and spacecraft, two-dimensional radar images of still higher resolution are needed to describe the targets accurately. Existing research shows that high-resolution target images observed over a wide angular range from multiple viewing angles effectively improve the reliability of target recognition, and that fusing target observations from different viewing angles raises the resolution of the imaging result, laying a foundation for high-performance target recognition. In general there are two ways to obtain wide-range multi-view observations of a turntable target: the first obtains a large viewing angle through long-time observation with a single radar receiver; the second obtains a large viewing angle in a short time by deploying multiple radar receivers at different positions. As a basic target model, the turntable model is usually built over a short time interval to model an actual non-cooperative moving target such as an aircraft or a ship. Because the motion characteristics of the target may change significantly over a long observation time, translation compensation then becomes difficult. Therefore, in practical applications, imaging systems mostly adopt the second observation method at the cost of additional hardware, i.e. a spatially separated multi-receiver imaging system. Research on multi-base ISAR high-resolution fusion imaging is thus of great significance.
The published article "Data-Level Fusion of Multilook Inverse Synthetic Aperture Radar Images" (IEEE Transactions on Geoscience and Remote Sensing, 2008, 46(5): 1394-) proposes a dual-view ISAR data-level fusion imaging method. Its implementation steps are: construct a dual-view ISAR observation geometry and echo signal model; determine the dual-view ISAR data fusion rule; and solve the fused image by matrix Fourier transform. However, this method only considers ISAR fusion imaging of an ideal turntable model at different viewing angles, omits the estimation of the target rotation parameters and of the viewing-angle differences between the ISARs, and is therefore not suitable for imaging non-cooperative moving targets.
The invention patent filed by Tsinghua University (publication No. CN101685154A, application No. 200810223234.7) discloses a bi-/multi-static inverse synthetic aperture radar (ISAR) image fusion method with the following steps: divide the target signals received by each radar equally to obtain two range-Doppler images of equal resolution; extract and associate the scattering points of all Doppler images, and estimate the viewing-angle differences between the radars, the target rotation speed, and the equivalent rotation center; and obtain the target fusion imaging result by a convolution–back-projection method. However, this method needs to extract the scattering-point positions, so under unfavorable conditions such as noise the final imaging result is affected.
Disclosure of Invention
The invention aims to provide a multi-base ISAR high-resolution fusion imaging method based on variational Bayesian learning that images a target accurately at low signal-to-noise ratio when the target rotation parameters and the observation-angle differences between the radars are unknown, and finally obtains a well-focused two-dimensional ISAR image.
The basic idea of the invention is as follows: based on compressive sensing theory, the multi-base ISAR high-resolution fusion imaging problem is converted into a sparse signal representation problem, and the unknown-parameter estimation problem and the fusion imaging problem are solved jointly, so that the fused high-resolution two-dimensional ISAR image is obtained at the same time as the optimal estimates of the unknown parameters. The implementation comprises the following steps:
(1) Record the echo signal S1 of the first radar and the echo signal S2 of the second radar with the inverse synthetic aperture radars (ISAR); S1 and S2 both have dimension Nr×Na, where Nr is the number of range samples and Na is the number of azimuth samples;
(2) Perform range pulse compression on the two echo signals S1 and S2 to obtain the pulse-compressed echo S1′ of the first radar and S2′ of the second radar, and stack each of S1′ and S2′ column-wise into the vectors Y1 and Y2, each of dimension N×1, where N = Nr×Na;
(3) Concatenate the two column vectors Y1 and Y2 to obtain the observation vector Y = [Y1ᵀ, Y2ᵀ]ᵀ of dimension M×1, where M = 2N;
(4) Divide the imaging scene equally into K sections of length A along the x direction and into L sections of length R along the y direction; express the scattering coefficient of the imaging region as a matrix Ω, and stack Ω column-wise into the vector σ, where Ω has dimension K×L, σ has dimension Q×1, and Q = K×L;
(5) Construct the dictionary matrix Φ1 of the first radar and the dictionary matrix Φ2 of the second radar, and stack Φ1 and Φ2 to obtain the dictionary matrix Φ = [Φ1ᵀ, Φ2ᵀ]ᵀ corresponding to the M×1 observation vector, where Φ has dimension M×Q and Φ1 and Φ2 each have dimension N×Q;
(6) Construct the fusion imaging model from the observation vector Y and the dictionary matrix Φ: Y = Φσ + n, where n is a noise vector of dimension M×1;
(7) Set the rotation angle θ and the observation angle β2 of the second radar, and optimize them to obtain the optimal estimates θ̂ and β̂2:
(7a) Set the initial estimation interval of the rotation angle θ to [θmin, θmax] with estimation step Δθ, and the initial estimation interval of the second radar's observation angle β2 to [β2min, β2max] with estimation step Δβ2; set the initial iteration number i = 1;
(7b) From the estimation intervals and steps set in (7a), compute the rotation-angle value of the i-th iteration, θi = θmin + i·Δθ, and the second-radar observation-angle value β2,i = β2min + i·Δβ2;
(7c) Construct the dictionary matrix Φ corresponding to the i-th iteration of the fusion imaging model, and solve the scattering-coefficient vector σ corresponding to the i-th iteration of the fusion imaging model;
(7d) Compute the mean squared error of the i-th iteration between the observation vector Y and the product of the dictionary matrix Φ and the scattering-coefficient vector σ, Ei = ‖Y − Φσ‖₂², and record the corresponding rotation-angle value θi and second-radar observation-angle value β2,i, where ‖·‖₂² denotes the square of the vector 2-norm;
(7e) Judge whether the iteration termination condition is met: if both θi > θmax and β2,i > β2max hold, terminate the iteration and take the values of the rotation angle θ and the second radar's observation angle β2 corresponding to the minimum mean squared error; otherwise set i = i + 1 and return to step (7b);
(7f) Gradually reduce the estimation intervals and estimation steps of the rotation angle θ and the second radar's observation angle β2, and repeat (7b)–(7e) to obtain the optimal estimates θ̂ and β̂2;
(8) Using the optimal rotation-angle estimate θ̂ and the optimal estimate β̂2 of the second radar's observation angle obtained in step (7), construct the optimal fusion imaging model Y = Φ′σ′ + n; solve the fused scattering-coefficient vector σ′ corresponding to the optimal fusion imaging model by variational inference, and reshape σ′ into the fused scattering-coefficient matrix Ω′ to obtain the final fusion imaging result.
The invention has the following advantages:
1. By using a parameterized dictionary, the invention learns the target rotation angle and the radar observation-angle difference dynamically during radar detection, so optimal estimates of both can be obtained without a cumbersome scattering-point extraction step.
2. The fused scattering-coefficient vector is estimated by variational inference, which yields robust imaging under complex observation conditions such as low signal-to-noise ratio.
3. Fusion imaging with a multi-base radar combines information from different viewing angles, improving the imaging quality of the target and allowing target information to be acquired more accurately.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention;
FIG. 2 is a graph of the distribution of target equivalent scattering centers in the present invention;
FIG. 3 is a diagram of the first radar imaging result in the present invention;
FIG. 4 is a diagram of a second radar imaging result in the present invention;
FIG. 5 is a plot of mean square error of an unknown parameter estimate obtained using the present invention;
FIG. 6 is a diagram of the result of two radar fusion images obtained by the present invention.
Detailed Description
The following describes in detail specific embodiments and effects of the present invention with reference to the accompanying drawings.
Referring to fig. 1, the implementation steps of the invention are as follows:
step 1, ISAR records echo signals S of two radars 1And S 2
ISAR records echo signals S of two radars 1And S 2The method is characterized in that after two electromagnetic waves transmitted by inverse synthetic aperture radars working at different observation angles meet a target in the transmission process, the target reflects the electromagnetic waves, the reflected echoes are received by a radar receiver, and an echo signal S of a first radar is displayed on a radar display 1And echo signal S of the second radar 2
Step 2: Perform range pulse compression on the two echo signals S1 and S2.
The range pulse compression may be performed by matched filtering or by de-chirping; this example uses de-chirping but is not limited to it, with the following steps:
2a) Take the distance from the inverse synthetic aperture radar to the scene center as the reference distance, and take a linear frequency-modulated signal with the same carrier frequency and chirp rate as the radar's transmitted signal as the reference signal;
2b) Conjugate the reference signals and multiply them with the echo signals received by the two radars to obtain the de-chirped echoes of the two radars:
S11(t̂, tm) = S1(t̂, tm) · S*r1(t̂, tm)
S22(t̂, tm) = S2(t̂, tm) · S*r2(t̂, tm)
where t̂ is the range fast time, tm is the azimuth slow time, Sr1(·) is the reference signal of the first radar, Sr2(·) is the reference signal of the second radar, S11 is the de-chirped signal of the first radar, S22 is the de-chirped signal of the second radar, and * denotes conjugation;
2c) Apply a one-dimensional inverse Fourier transform along the range dimension to the two de-chirped echoes S11 and S22 to obtain the pulse-compressed echo S1′ of the first radar and the pulse-compressed echo S2′ of the second radar.
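As an illustration of steps 2a)–2c), the de-chirp range compression can be sketched in NumPy as below. The array sizes and the reference chirp are illustrative placeholders, not the patent's actual waveform parameters.

```python
import numpy as np

def dechirp_pulse_compress(echo, ref):
    """Range pulse compression by de-chirping: multiply the echo by the
    conjugated reference chirp (step 2b), then apply a 1-D inverse Fourier
    transform along the range (fast-time) dimension (step 2c)."""
    dechirped = echo * np.conj(ref)          # S11 = S1 . Sr1*
    return np.fft.ifft(dechirped, axis=0)    # range dimension = axis 0

# toy example with hypothetical sizes Nr x Na
Nr, Na = 64, 16
t_hat = np.arange(Nr)[:, None]               # fast-time sample index
ref = np.exp(1j * np.pi * 1e-3 * t_hat**2) * np.ones((1, Na))
rng = np.random.default_rng(0)
echo = ref * (rng.standard_normal((Nr, Na)) + 1j * rng.standard_normal((Nr, Na)))
S1_prime = dechirp_pulse_compress(echo, ref)
```

The output keeps the Nr×Na layout of the echo, matching the S1′ and S2′ matrices used in the next steps.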
Step 3: Construct the dictionary matrices Φ1 and Φ2 of the two radars.
3a) Let the two-dimensional imaging scene have length A meters in the x direction and R meters in the y direction;
3b) Divide the imaging scene equally into K sections along the x direction, each of length A/K, and into L sections along the y direction, each of length R/L. The scattering coefficient of the imaging region can then be expressed as a matrix Ω; stacking Ω column-wise gives the vector σ, where Ω has dimension K×L, σ has dimension Q×1, and Q = K×L;
3c) Each of the two radars acquires an observation matrix for every possible target scattering-point position on the two-dimensional imaging grid: φ1^(m,n) for the first radar and φ2^(m,n) for the second radar, where φ1^(m,n) denotes the observation matrix of the scattering point at (m, n) acquired by the first radar and φ2^(m,n) the one acquired by the second radar; φ1^(m,n) and φ2^(m,n) both have dimension Nr×Na, with m = 1, 2, …, K and n = 1, 2, …, L;
3d) Vectorize the observation matrices of the scattering points on all grid cells acquired by the two radars: φ̄1^(m,n) = vec(φ1^(m,n)) and φ̄2^(m,n) = vec(φ2^(m,n)), where φ̄1^(m,n) is the column vector obtained by the first radar after vectorizing the observation matrix of the scattering point at (m, n), φ̄2^(m,n) is the corresponding column vector for the second radar, and vec(·) is the vectorization function that stacks each column of a matrix beneath the previous one, converting the matrix into a column vector;
3e) Form each radar's dictionary matrix from the vectorized observation columns of the scattering points on all grid cells: Φ1 = [φ̄1^(1,1), φ̄1^(2,1), …, φ̄1^(K,L)] for the first radar and Φ2 = [φ̄2^(1,1), φ̄2^(2,1), …, φ̄2^(K,L)] for the second radar.
and 4, constructing a fusion imaging model.
4a) Vectorize the two range-pulse-compressed echoes S1′ and S2′ to obtain the two column vectors Y1 = vec(S1′) and Y2 = vec(S2′), and concatenate Y1 and Y2 into the observation vector Y = [Y1ᵀ, Y2ᵀ]ᵀ;
4b) Stack the dictionary matrices Φ1 and Φ2 of the two radars to obtain the joint dictionary matrix Φ = [Φ1ᵀ, Φ2ᵀ]ᵀ;
4c) Construct the fusion imaging model from the observation vector Y and the dictionary matrix Φ: Y = Φσ + n, where n is the noise vector.
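The stacking in steps 4a)–4c) can be sketched as follows; the sizes, the random dictionaries, and the single active scatterer are toy values used only to show the shapes.

```python
import numpy as np

N, Q = 12, 6
rng = np.random.default_rng(2)
Phi1 = rng.standard_normal((N, Q)) + 1j * rng.standard_normal((N, Q))
Phi2 = rng.standard_normal((N, Q)) + 1j * rng.standard_normal((N, Q))

sigma = np.zeros(Q, dtype=complex)
sigma[2] = 1.5 + 0.5j                    # one scatterer on the grid (toy)

Y1 = Phi1 @ sigma                        # vec of pulse-compressed echo, radar 1
Y2 = Phi2 @ sigma                        # vec of pulse-compressed echo, radar 2
Y = np.concatenate([Y1, Y2])             # observation vector, M = 2N
Phi = np.vstack([Phi1, Phi2])            # joint dictionary, M x Q
n = 0.01 * (rng.standard_normal(2 * N) + 1j * rng.standard_normal(2 * N))
Y_noisy = Y + n                          # fusion model: Y = Phi sigma + n
```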
Step 5: Set the rotation angle θ and the second radar's observation angle β2 and optimize them to obtain the optimal estimates θ̂ and β̂2.
5a) Set the initial estimation interval of the rotation angle θ to [θmin, θmax] with estimation step Δθ, and the initial estimation interval of the second radar's observation angle β2 to [β2min, β2max] with estimation step Δβ2; set the initial iteration number i = 1;
5b) From the estimation intervals and steps set in 5a), compute the rotation-angle value of the i-th iteration, θi = θmin + i·Δθ, and the second-radar observation-angle value β2,i = β2min + i·Δβ2;
5c) Construct the dictionary matrix Φ corresponding to the i-th iteration of the fusion imaging model and solve the corresponding scattering-coefficient vector σ, as follows:
5c1) Convert the observation vector Y into the real vector Ȳ = [Re(Y)ᵀ, Im(Y)ᵀ]ᵀ of dimension 2M×1, and convert the dictionary matrix Φ into the real matrix Φ̄ = [Re(Φ), −Im(Φ); Im(Φ), Re(Φ)] of dimension 2M×2Q, where Re(·) denotes the real part and Im(·) the imaginary part;
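The complex-to-real conversion of 5c1) can be checked numerically. The block structure of Φ̄ shown here is the standard one and is inferred from the stated 2M×2Q dimension; it is an assumption where the original figure is not reproduced.

```python
import numpy as np

def to_real(Y, Phi):
    """Real-valued equivalent of the complex model Y = Phi sigma:
    Ybar = [Re(Y); Im(Y)]                       (2M x 1)
    Phibar = [[Re(Phi), -Im(Phi)],
              [Im(Phi),  Re(Phi)]]              (2M x 2Q)
    acting on the real coefficient vector [Re(sigma); Im(sigma)]."""
    Ybar = np.concatenate([Y.real, Y.imag])
    Phibar = np.block([[Phi.real, -Phi.imag],
                       [Phi.imag,  Phi.real]])
    return Ybar, Phibar

M, Q = 6, 4
rng = np.random.default_rng(3)
Phi = rng.standard_normal((M, Q)) + 1j * rng.standard_normal((M, Q))
sigma = rng.standard_normal(Q) + 1j * rng.standard_normal(Q)
Ybar, Phibar = to_real(Phi @ sigma, Phi)
sbar = np.concatenate([sigma.real, sigma.imag])
# the real model reproduces the complex one exactly: Phibar @ sbar == Ybar
```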
5c2) Set the initial iteration number j = 1; the precisions λp = 10^5, p = 1, …, Q; the precision matrix A0 = diag(λ1, …, λQ); the noise-precision parameter α0 = 10^−3; the initial values of the four prior parameters a0 = b0 = e0 = fp0 = 10^−4; the initial coefficient vector ω0 equal to the Q×1 zero vector; the maximum iteration number iter = 20; and the termination threshold wth = 10^−3;
5c3) Compute the first parameter a^j and the third parameter e^j of the j-th iteration;
5c4) Compute the covariance matrix of the j-th iteration: Σ^j = (α^{j−1}·Φ̄ᵀΦ̄ + A^{j−1})^{−1};
5c5) Compute the mean vector of the j-th iteration, μ^j = α^{j−1}·Σ^j·Φ̄ᵀ·Ȳ, and set the weight-vector mean of the j-th iteration ω^j = μ^j;
5c6) Compute the fourth parameter of the j-th iteration: fp^j = fp0 + ((ωp^j)² + Σpp^j)/2, where ωp^j is the p-th element of the weight-vector mean ω^j and Σpp^j is the entry in row p, column p of the covariance matrix Σ^j;
5c7) Compute the second parameter b^j of the j-th iteration;
5c8) Compute each element of the precision matrix A^j of the j-th iteration: λp^j = e^j / fp^j;
5c9) Compute the noise precision of the j-th iteration: α^j = a^j / b^j;
5c10) Judge whether the iteration termination condition is met: if ‖ω^j − ω^{j−1}‖₂ < wth or j > iter, terminate the iteration and obtain the scattering-coefficient vector σ; otherwise set j = j + 1 and return to step 5c3);
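The inner loop 5c2)–5c10) is a variational sparse Bayesian regression. The update formulas below are the standard ones for that model (a Gaussian posterior for the weights and Gamma-posterior means for the precisions); they are a hedged stand-in for the patent's figure-only formulas, and the problem sizes and initial values are toy choices.

```python
import numpy as np

def vb_sparse_regression(Ybar, Phibar, iters=20, wth=1e-3):
    """Sketch of steps 5c2)-5c10): alternate a Gaussian posterior update
    for the weight vector omega with Gamma-posterior-mean updates for the
    per-coefficient precisions (the matrix A) and the noise precision."""
    n_obs, n_coef = Phibar.shape
    a0 = b0 = e0 = f0 = 1e-4                 # prior parameters (5c2)
    lam = np.full(n_coef, 1e2)               # coefficient precisions (toy init)
    alpha = 1.0                              # noise precision (toy init)
    omega = np.zeros(n_coef)
    PtP = Phibar.T @ Phibar
    PtY = Phibar.T @ Ybar
    for _ in range(iters):
        Sigma = np.linalg.inv(alpha * PtP + np.diag(lam))         # 5c4)
        mu = alpha * Sigma @ PtY                                  # 5c5)
        lam = (e0 + 0.5) / (f0 + 0.5 * (mu**2 + np.diag(Sigma)))  # 5c6), 5c8)
        resid = Ybar - Phibar @ mu
        alpha = (a0 + 0.5 * n_obs) / (b0 + 0.5 * (resid @ resid
                + np.trace(PtP @ Sigma)))                         # 5c7), 5c9)
        if np.linalg.norm(mu - omega) < wth:                      # 5c10)
            omega = mu
            break
        omega = mu
    return omega

rng = np.random.default_rng(4)
Phibar = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[3] = 2.0                              # single active coefficient
Ybar = Phibar @ x_true + 0.01 * rng.standard_normal(40)
omega = vb_sparse_regression(Ybar, Phibar)
```

On this toy problem the recovered weight vector concentrates on the single active coefficient, illustrating why the scheme is robust at low signal-to-noise ratio.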
5d) Compute the mean squared error of the i-th iteration between the observation vector Y and the product of the dictionary matrix Φ and the scattering-coefficient vector σ, Ei = ‖Y − Φσ‖₂², and record the corresponding rotation-angle value θi and second-radar observation-angle value β2,i, where ‖·‖₂² denotes the square of the vector 2-norm;
5e) Judge whether the iteration termination condition is met: if both θi > θmax and β2,i > β2max hold, terminate the iteration and take the values of θ and β2 corresponding to the minimum mean squared error; otherwise set i = i + 1 and return to step 5b);
5f) Gradually reduce the estimation intervals and estimation steps of the rotation angle θ and the second radar's observation angle β2, and repeat 5b)–5e) to obtain the optimal estimates θ̂ and β̂2.
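Steps 5a)–5f) amount to a two-dimensional grid search that rebuilds the dictionary for each candidate (θ, β2) pair and keeps the pair with the smallest residual ‖Y − Φσ‖₂². The sketch below uses a toy angle-dependent "dictionary" and an ordinary least-squares solver in place of the patent's parameterized ISAR dictionary and VB solver, and omits the interval-refinement loop of 5f).

```python
import numpy as np

def grid_search(Y, build_phi, solve_sigma, thetas, betas):
    """Exhaustive search over theta and beta2 (steps 5b)-5e)): build the
    dictionary for each grid point, solve for sigma, and keep the pair
    minimising the squared residual (step 5d))."""
    best = (None, None, np.inf)
    for th in thetas:
        for b2 in betas:
            Phi = build_phi(th, b2)
            sigma = solve_sigma(Y, Phi)
            err = np.linalg.norm(Y - Phi @ sigma) ** 2
            if err < best[2]:
                best = (th, b2, err)
    return best

def build_phi(th, b2):
    # toy 3x2 angle-dependent dictionary (hypothetical stand-in)
    a = th + b2
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)],
                     [np.cos(2 * a), np.sin(2 * a)]])

lstsq = lambda Y, Phi: np.linalg.lstsq(Phi, Y, rcond=None)[0]
Y = build_phi(0.3, 0.0) @ np.array([1.0, 2.0])   # truth: theta = 0.3
th_hat, b2_hat, err = grid_search(Y, build_phi, lstsq,
                                  thetas=[0.0, 0.1, 0.2, 0.3, 0.4],
                                  betas=[0.0])
```

Only at the true angle does Y lie in the column space of the toy dictionary, so the residual minimum lands on the correct grid point.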
and 6, obtaining a final fusion imaging result.
6a) Using the optimal rotation-angle estimate θ̂ and the optimal estimate β̂2 of the second radar's observation angle obtained in step 5, construct the optimal fusion imaging model Y = Φ′σ′ + n with the dictionary matrix Φ′ = [Φ1′ᵀ, Φ2′ᵀ]ᵀ, where Φ1′ = [φ̄1′^(1,1), …, φ̄1′^(K,L)] is the dictionary matrix of the first radar formed from the vectorized optimal observation matrices, φ̄1′^(m,n) = vec(φ1′^(m,n)) is the vectorized optimal observation matrix of the scattering point at (m, n) acquired by the first radar, and φ1′^(m,n) is that optimal observation matrix; Φ2′, φ̄2′^(m,n), and φ2′^(m,n) are defined analogously for the second radar; B is the signal bandwidth, c is the speed of light, and f0 is the carrier frequency of the signal;
6b) Solve the target scattering-coefficient vector σ′ corresponding to the optimal fusion imaging model by variational inference, as follows:
6b1) Convert the dictionary matrix Φ′ into the real matrix Φ̄′ = [Re(Φ′), −Im(Φ′); Im(Φ′), Re(Φ′)] of dimension 2M×2Q;
6b2) Solve as in steps 5c2)–5c10) to obtain the fused scattering-coefficient vector σ′;
6c) Reshape the fused scattering-coefficient vector σ′ into the fused scattering-coefficient matrix Ω′ to obtain the final fusion imaging result.
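Step 6c) is the inverse of the column-wise stacking used in step 3b); with NumPy's Fortran ('F') order, vec(·) and its inverse round-trip exactly. K and L are toy sizes.

```python
import numpy as np

K, L = 3, 2                                   # toy scene grid
Omega = np.arange(K * L, dtype=float).reshape(K, L)
sigma = Omega.flatten(order='F')              # vec(Omega): stack the columns
Omega_back = sigma.reshape(K, L, order='F')   # step 6c): restore the image
```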
The effects of the present invention can be further illustrated by the following simulations:
1. Simulation parameters
Echo signals are received by two radars operating in the X band, with carrier frequency 10 GHz and bandwidth 600 MHz; the observation angle of radar 1 is 0° and that of radar 2 is 3°. The imaging scene is 12 meters long and 6 meters wide, the target contains 33 scattering points, the target rotation angle is 0.0075 rad, and the signal-to-noise ratio is set to 5 dB.
2. Simulation content
Simulation 1: The scene of FIG. 2 is imaged by the first radar and its two-dimensional ISAR image is drawn; the result is shown in FIG. 3.
Simulation 2: The scene of FIG. 2 is imaged by the second radar and its two-dimensional ISAR image is drawn; the result is shown in FIG. 4.
Simulation 3: The rotation angle and the second radar's observation angle are estimated with the proposed method and the mean-squared-error map is drawn; the result is shown in FIG. 5.
Simulation 4: The fusion imaging result of the two radars for the scene of FIG. 2 is solved with the proposed method and the two-dimensional ISAR image is drawn; the result is shown in FIG. 6.
Compared with the single-radar imaging results, the multi-base fused image obtained by the method contains fewer false points, recovers all scattering points correctly, and reconstructs the positions and amplitudes of the scattering points accurately; the image quality is clearly improved.
as can be seen from fig. 5, the target rotation angle estimation value corresponding to the minimum mean square error is 0.0075rad, and the second radar observation angle estimation value is 3 °.
Simulation results show that, by using compressive sensing theory to convert the multi-base ISAR fusion imaging problem into a parameterized sparse representation problem and solving the unknown-parameter estimation and the fusion imaging jointly, the method obtains a high-quality target image together with the optimal estimates of the unknown parameters, and is able to obtain a high-resolution two-dimensional ISAR image within a short coherent time.

Claims (4)

1. A multi-base ISAR fusion imaging method based on variational Bayesian learning, characterized by comprising the following steps:
(1) Record the echo signal S1 of the first radar and the echo signal S2 of the second radar with the inverse synthetic aperture radars (ISAR); S1 and S2 both have dimension Nr×Na, where Nr is the number of range samples and Na is the number of azimuth samples;
(2) Perform range pulse compression on the two echo signals S1 and S2 to obtain the pulse-compressed echo S1′ of the first radar and S2′ of the second radar, and stack each of S1′ and S2′ column-wise into the vectors Y1 and Y2, each of dimension N×1, where N = Nr×Na;
(3) Concatenate the two column vectors Y1 and Y2 to obtain the observation vector Y = [Y1ᵀ, Y2ᵀ]ᵀ of dimension M×1, where M = 2N;
(4) Divide the imaging scene equally into K sections of length A along the x direction and into L sections of length R along the y direction; express the scattering coefficient of the imaging region as a matrix Ω, and stack Ω column-wise into the vector σ, where Ω has dimension K×L, σ has dimension Q×1, and Q = K×L;
(5) Construct the dictionary matrix Φ1 of the first radar and the dictionary matrix Φ2 of the second radar, and stack Φ1 and Φ2 to obtain the dictionary matrix Φ = [Φ1ᵀ, Φ2ᵀ]ᵀ corresponding to the M×1 observation vector, where Φ has dimension M×Q and Φ1 and Φ2 each have dimension N×Q;
(6) Construct the fusion imaging model from the observation vector Y and the dictionary matrix Φ: Y = Φσ + n, where n is a noise vector of dimension M×1;
(7) Set the rotation angle θ and the observation angle β2 of the second radar, and optimize them to obtain the optimal estimates θ̂ and β̂2:
(7a) Set the initial estimation interval of the rotation angle θ to [θmin, θmax] with estimation step Δθ, and the initial estimation interval of the second radar's observation angle β2 to [β2min, β2max] with estimation step Δβ2; set the initial iteration number i = 1;
(7b) From the estimation intervals and steps set in (7a), compute the rotation-angle value of the i-th iteration, θi = θmin + i·Δθ, and the second-radar observation-angle value β2,i = β2min + i·Δβ2;
(7c) Construct the dictionary matrix Φ corresponding to the i-th iteration of the fusion imaging model, and solve the scattering-coefficient vector σ corresponding to the i-th iteration of the fusion imaging model;
(7d) Compute the mean squared error of the i-th iteration between the observation vector Y and the product of the dictionary matrix Φ and the scattering-coefficient vector σ, Ei = ‖Y − Φσ‖₂², and record the corresponding rotation-angle value θi and second-radar observation-angle value β2,i, where ‖·‖₂² denotes the square of the vector 2-norm;
(7e) Judge whether the iteration termination condition is met: if both θi > θmax and β2,i > β2max hold, terminate the iteration and take the values of the rotation angle θ and the second radar's observation angle β2 corresponding to the minimum mean squared error; otherwise set i = i + 1 and return to step (7b);
(7f) Gradually reduce the estimation intervals and estimation steps of the rotation angle θ and the second radar's observation angle β2, and repeat (7b)–(7e) to obtain the optimal estimates θ̂ and β̂2;
(8) Using the optimal rotation-angle estimate θ̂ and the optimal estimate β̂2 of the second radar's observation angle obtained in step (7), construct the optimal fusion imaging model Y = Φ′σ′ + n; solve the fused scattering-coefficient vector σ′ corresponding to the optimal fusion imaging model by variational inference, and reshape σ′ into the fused scattering-coefficient matrix Ω′ to obtain the final fusion imaging result.
2. The method according to claim 1, wherein step (2) performs range pulse compression on the two radar echo signals as follows:
(2a) take the distance from the inverse synthetic aperture radar to the scene center as the reference distance, and take a linear frequency-modulated signal with the same carrier frequency and chirp rate as the radar's transmitted signal as the reference signal;
(2b) conjugate the reference signal and multiply it with the echo signals S_1 and S_2 received by the two radars, respectively, to obtain the dechirped echo signal S_11 of the first radar and the dechirped echo signal S_22 of the second radar;
(2c) perform a one-dimensional inverse Fourier transform along the range dimension on the two dechirped echo signals S_11 and S_22, respectively, to obtain the range-compressed echo signal S_1′ of the first radar and the range-compressed echo signal S_2′ of the second radar.
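The dechirp-and-compress chain of claim 2 can be sketched as below: multiply the echo by the conjugated reference chirp, then apply a one-dimensional inverse Fourier transform along the range dimension. All signal parameters (`fs`, `fc`, `gamma`, `R_ref`) are illustrative assumptions, not values from the patent:

```python
import numpy as np

def range_compress(echo, fs, fc, gamma, R_ref, c=3e8):
    """Dechirp an ISAR echo against a reference chirp and compress in range.

    echo:  2-D array (pulses x fast-time samples).
    fs:    fast-time sample rate; fc: carrier frequency; gamma: chirp rate;
    R_ref: reference distance (radar to scene centre).  All assumed values.
    """
    n = echo.shape[-1]
    t = np.arange(n) / fs                        # fast time within one pulse
    tau = 2.0 * R_ref / c                        # two-way reference delay
    # Reference chirp with the same carrier frequency and chirp rate (step 2a)
    ref = np.exp(1j * (2 * np.pi * fc * (t - tau) + np.pi * gamma * (t - tau) ** 2))
    dechirped = echo * np.conj(ref)              # step (2b): conjugate multiply
    # Step (2c): 1-D inverse Fourier transform along the range dimension
    return np.fft.ifft(dechirped, axis=-1)
```

A point scatterer at exactly the reference distance dechirps to a constant phase, so its compressed response peaks in the zero-frequency range bin.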
3. The method of claim 1, wherein the two radar dictionary matrices are constructed in step (5) as follows:
(5a) the two radars obtain the observation matrices Φ_1^(m,n) and Φ_2^(m,n) of every possible target scattering point position (m, n) on the two-dimensional imaging scene grid, where Φ_1^(m,n) denotes the observation matrix of the first radar for a scattering point located at (m, n), Φ_2^(m,n) denotes the observation matrix of the second radar for a scattering point located at (m, n), both Φ_1^(m,n) and Φ_2^(m,n) have dimension N_r × N_a, m = 1, 2, …, K, n = 1, 2, …, L;
(5b) vectorize the observation matrices of the scattering points on all grid cells obtained by the two radars to get the column vectors φ_1^(m,n) = vec(Φ_1^(m,n)) and φ_2^(m,n) = vec(Φ_2^(m,n)), where φ_1^(m,n) is the vectorized observation matrix of the first radar for a scattering point at (m, n), φ_2^(m,n) is the vectorized observation matrix of the second radar for a scattering point at (m, n), and vec(·) is the vectorization function that stacks each column of a matrix beneath its previous column, converting the matrix into a column vector;
(5c) form a dictionary matrix from the vectorized column vectors of each radar:
the dictionary matrix of the first radar: Φ_1 = [φ_1^(1,1), φ_1^(1,2), …, φ_1^(K,L)];
the dictionary matrix of the second radar: Φ_2 = [φ_2^(1,1), φ_2^(1,2), …, φ_2^(K,L)].
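The vec-and-stack construction of claim 3 can be sketched as follows. Here `obs_matrix` is a hypothetical callback returning the N_r × N_a observation matrix for grid cell (m, n); its exact form depends on the radar geometry and is not taken from the patent:

```python
import numpy as np

def build_dictionary(obs_matrix, K, L):
    """Stack vectorized per-pixel observation matrices into a dictionary.

    obs_matrix(m, n) must return the Nr x Na observation matrix for a
    scattering point at grid cell (m, n); an assumed interface.
    """
    cols = []
    for m in range(1, K + 1):
        for n in range(1, L + 1):
            A = obs_matrix(m, n)
            # vec(.): stack each column beneath the previous one
            cols.append(A.reshape(-1, order='F'))
    return np.column_stack(cols)                 # shape: (Nr*Na, K*L)
```

The Fortran-order reshape (`order='F'`) flattens column-major, which is exactly the column-stacking vec(·) operation described in step (5b).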
4. The method of claim 1, wherein the scattering coefficient vector σ is solved in step (7c) as follows:
(7c1) convert the real and imaginary parts of the observation vector Y into the real vector Y_R = [Re(Y); Im(Y)] of dimension 2M × 1, and convert the real and imaginary parts of the dictionary matrix Φ into the real matrix Φ_R = [Re(Φ), -Im(Φ); Im(Φ), Re(Φ)] of dimension 2M × 2Q, where Re denotes the real part and Im denotes the imaginary part;
(7c2) set the initial iteration number i = 1, the precisions λ_p = 10^5, p = 1, …, Q, the precision matrix A_0 = diag(λ_1, …, λ_Q), the noise precision parameter c_0 = 10^{-3}, the initial values of the four parameters, the maximum iteration number iter = 20, and the termination threshold wth = 10^{-3};
(7c3) compute the first parameter a_i and the third parameter e_i of the i-th iteration from their respective update formulas;
(7c4) compute the covariance matrix of the i-th iteration: Σ_i = (c_{i-1} Φ_R^T Φ_R + A_{i-1})^{-1};
(7c5) compute the mean vector of the i-th iteration: μ_i = c_{i-1} Σ_i Φ_R^T Y_R, and let the weight vector mean of the i-th iteration be ω_i = μ_i;
(7c6) compute the fourth parameter of the i-th iteration from ω_p^i and Σ_pp^i, where ω_p^i is the p-th element of the weight vector mean ω_i and Σ_pp^i is the element in the p-th row and p-th column of the covariance matrix Σ_i;
(7c7) compute the second parameter of the i-th iteration;
(7c8) compute each element λ_p^i of the precision matrix A_i of the i-th iteration;
(7c9) compute the noise precision c_i of the i-th iteration;
(7c10) judge whether the iteration termination condition is met: if ||ω_i - ω_{i-1}||_2 / ||ω_{i-1}||_2 < wth or i > iter, terminate the iteration to obtain the target scattering coefficient vector σ; otherwise, set i = i + 1 and return to step (7c3).
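Claim 4 is a variational Bayesian (sparse Bayesian learning) regression on the real-stacked model. The sketch below uses the standard textbook mean/covariance/precision updates in place of the patent's four-parameter Gamma updates (which survive only as equation images in the source), so it should be read as an assumed simplification rather than the claimed formulas:

```python
import numpy as np

def vb_sparse_solve(Y, Phi, max_iter=20, tol=1e-3):
    """Variational-Bayes style sparse regression for Y ~ Phi @ sigma + noise.

    Standard sparse-Bayesian-learning updates; a simplified stand-in for
    the patent's steps (7c2)-(7c10), not the claimed formulas themselves.
    """
    # Step (7c1): map the complex model to a real one by stacking Re and Im
    Yr = np.concatenate([Y.real, Y.imag])
    Pr = np.vstack([np.hstack([Phi.real, -Phi.imag]),
                    np.hstack([Phi.imag,  Phi.real])])
    M2, Q2 = Pr.shape
    lam = np.full(Q2, 1e-2)                      # per-coefficient precisions
    c = 1e3                                      # noise precision
    w_old = np.zeros(Q2)
    for _ in range(max_iter):
        # Posterior covariance and mean, as in steps (7c4)-(7c5)
        Sigma = np.linalg.inv(c * Pr.T @ Pr + np.diag(lam))
        w = c * Sigma @ Pr.T @ Yr
        # Precision updates (stand-ins for the four-parameter Gamma updates)
        lam = 1.0 / (w ** 2 + np.diag(Sigma))
        resid = Yr - Pr @ w
        c = M2 / (resid @ resid + np.trace(Pr @ Sigma @ Pr.T))
        # Relative-change test on the weight mean, playing the role of wth
        if np.linalg.norm(w - w_old) <= tol * max(np.linalg.norm(w_old), 1e-12):
            break
        w_old = w
    # Reassemble the complex scattering coefficients from [Re; Im]
    return w[:Q2 // 2] + 1j * w[Q2 // 2:]
```

On an overdetermined noise-free problem the iteration quickly drives the precisions of zero coefficients up, shrinking them toward zero while leaving the true scatterers intact.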
CN201911058800.8A 2019-11-01 2019-11-01 Multi-base ISAR fusion imaging method based on variational Bayes learning Active CN110780298B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911058800.8A CN110780298B (en) 2019-11-01 2019-11-01 Multi-base ISAR fusion imaging method based on variational Bayes learning

Publications (2)

Publication Number Publication Date
CN110780298A true CN110780298A (en) 2020-02-11
CN110780298B CN110780298B (en) 2023-04-07

Family

ID=69388611

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911058800.8A Active CN110780298B (en) 2019-11-01 2019-11-01 Multi-base ISAR fusion imaging method based on variational Bayes learning

Country Status (1)

Country Link
CN (1) CN110780298B (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090121926A1 (en) * 2007-11-09 2009-05-14 The Boeing Company Multi-spot inverse synthetic aperture radar imaging
CN101685154A (en) * 2008-09-27 2010-03-31 清华大学 Image fusion method of double/multiple base inverse synthetic aperture radar
WO2014149095A2 (en) * 2013-03-20 2014-09-25 Raytheon Company Bistatic inverse synthetic aperture radar imaging
US20150260839A1 (en) * 2014-03-17 2015-09-17 Raytheon Company High-availability isar image formation
CN107132535A (en) * 2017-04-07 2017-09-05 西安电子科技大学 The sparse frequency band imaging methods of ISAR based on Variational Bayesian Learning algorithm
CN109507666A (en) * 2018-12-21 2019-03-22 西安电子科技大学 The sparse frequency band imaging method of ISAR based on off-network variation bayesian algorithm
KR20190036325A (en) * 2017-09-27 2019-04-04 포항공과대학교 산학협력단 Apparatus for autofocusing and cross range scaling of isar image using compressive sensing and method thereof
CN109633647A (en) * 2019-01-21 2019-04-16 中国人民解放军陆军工程大学 A kind of bistatic ISAR sparse aperture imaging method
CN110068805A (en) * 2019-05-05 2019-07-30 中国人民解放军国防科技大学 High-speed target HRRP reconstruction method based on variational Bayesian inference

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
LIN SUN: "Improved Bayesian ISAR Imaging by Learning the Local Structures of the Target Scene", IEEE Sensors Journal *
XIAOXIU ZHU: "Bi-ISAR sparse imaging algorithm with complex Gaussian scale mixture prior", IET Radar, Sonar & Navigation *
JIN Liangnian et al.: "Autofocusing sparse imaging method for extended targets in through-wall radar", Radar Science and Technology *
ZHU Xiaoxiu et al.: "Bistatic ISAR sparse-aperture autofocus imaging under time-varying bistatic angle", Acta Aeronautica et Astronautica Sinica *
WANG Tianyun et al.: "Sparse imaging of off-grid targets in FD-MIMO radar based on Bayesian compressive sensing", Acta Electronica Sinica *
HUANG Ping: "Research on high-resolution imaging methods for sparse-band inverse synthetic aperture radar", China Excellent Master's Theses Full-text Database, Information Science and Technology *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112069651A (en) * 2020-07-23 2020-12-11 西安空间无线电技术研究所 Spin-stabilized target rotating shaft estimation method based on ISAR imaging
CN112069651B (en) * 2020-07-23 2024-04-09 西安空间无线电技术研究所 Method for estimating spin-stabilized target rotation axis based on ISAR imaging
CN112612026B (en) * 2020-11-20 2022-06-21 哈尔滨工业大学 Target angle resolution method based on dual-radar range profile fusion
CN112612026A (en) * 2020-11-20 2021-04-06 哈尔滨工业大学 Target angle resolution method based on dual-radar range profile fusion
CN112558067A (en) * 2020-11-23 2021-03-26 哈尔滨工业大学 Radar imaging method based on range image and ISAR image fusion
CN112558067B (en) * 2020-11-23 2023-11-03 哈尔滨工业大学 Radar imaging method based on fusion of range profile and ISAR (inverse synthetic aperture radar) image
CN112859074A (en) * 2021-01-14 2021-05-28 中国人民解放军陆军工程大学 Multi-band multi-view ISAR fusion imaging method
CN112859075B (en) * 2021-01-14 2022-07-19 中国人民解放军陆军工程大学 Multi-band ISAR fusion high-resolution imaging method
CN112859074B (en) * 2021-01-14 2022-07-19 中国人民解放军陆军工程大学 Multi-band multi-view ISAR fusion imaging method
CN112859075A (en) * 2021-01-14 2021-05-28 中国人民解放军陆军工程大学 Multi-band ISAR fusion high-resolution imaging method
CN113126095A (en) * 2021-04-21 2021-07-16 西安电子科技大学 Two-dimensional ISAR rapid imaging method based on sparse Bayesian learning
CN115378591A (en) * 2022-07-18 2022-11-22 咚咚数字科技有限公司 Anonymous biological characteristic key transmission method based on fusion
CN115378591B (en) * 2022-07-18 2023-04-07 咚咚数字科技有限公司 Anonymous biological characteristic key transmission method based on fusion

Also Published As

Publication number Publication date
CN110780298B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN110780298B (en) Multi-base ISAR fusion imaging method based on variational Bayes learning
CN107132535B (en) ISAR sparse band imaging method based on variational Bayesian learning algorithm
Xu et al. Sparse synthetic aperture radar imaging from compressed sensing and machine learning: Theories, applications, and trends
CN111142105B (en) ISAR imaging method for complex moving target
Ender A brief review of compressive sensing applied to radar
CN104851097B (en) Multichannel SAR GMTI method aided by target shape and shadow
CN105842693B (en) Dual-channel SAR moving-target detection method based on compressed sensing
US9316734B2 (en) Free-hand scanning and imaging
CN109669182B (en) Passive bistatic SAR moving/static target joint sparse imaging method
CN102914773B (en) Multi-pass circumference SAR three-dimensional imaging method
CN108226928A (en) Inverse synthetic aperture radar imaging method based on expectation propagation algorithm
CN110596706B (en) Radar scattering sectional area extrapolation method based on three-dimensional image domain projection transformation
CN113608218A (en) Frequency domain interference phase sparse reconstruction method based on back projection principle
CN112230221A (en) RCS (Radar Cross section) measurement method based on three-dimensional sparse imaging
CN108415017A (en) One-dimensional augmented state-space method for sparse representation of complex-target radar scattering characteristics
Heister et al. Coherent large beamwidth processing of radio-echo sounding data
Thammakhoune et al. Moving target imaging for synthetic aperture radar via RPCA
Shuzhen et al. Near-field 3D imaging approach combining MJSR and FGG-NUFFT
CN112684445B (en) MIMO-ISAR three-dimensional imaging method based on MD-ADMM
Zhan et al. An ISAR imaging and cross-range scaling method based on phase difference and improved axis rotation transform
Dong et al. High-Resolution and Wide-Swath Imaging of Spaceborne SAR via Random PRF Variation Constrained by the Coverage Diagram
CN108931770B (en) ISAR imaging method based on multi-dimensional beta process linear regression
Gong et al. High resolution 3d InISAR imaging of space targets based on PFA algorithm with single baseline
Salman et al. Super-resolution object recognition approach for complex edged objects by UWB radar
Ren et al. A novel strategy for inverse synthetic aperture radar imaging based on improved compressive sensing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant