CN116243313A - SAR rapid intelligent sparse self-focusing technology based on distance partition - Google Patents

SAR rapid intelligent sparse self-focusing technology based on distance partition

Info

Publication number
CN116243313A
CN116243313A (application CN202310251820.7A)
Authority
CN
China
Prior art keywords
distance
sparse
focusing
sar
imaging
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310251820.7A
Other languages
Chinese (zh)
Inventor
丁泽刚
卫扬铠
汪子文
李凌豪
袁跳跳
李埔丞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202310251820.7A priority Critical patent/CN116243313A/en
Publication of CN116243313A publication Critical patent/CN116243313A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00: Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/88: Radar or analogous systems specially adapted for specific applications
    • G01S 13/89: Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S 13/90: Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • G01S 13/904: SAR modes
    • G01S 13/9052: Spotlight mode
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00: Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00
    • G01S 7/02: Details of systems according to group G01S 13/00
    • G01S 7/41: Details using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S 7/417: Details involving the use of neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

To address the high computational complexity and poor focusing quality of conventional sparsity-based SAR imaging self-focusing methods, a SAR rapid intelligent sparse self-focusing technique based on distance partitioning is provided. First, the range data of the radar echo are reconstructed and partitioned by distance; the partitioned observation matrices are accurately rebuilt through slant-range resampling, and the distance-partition strategy reduces the computation and storage burden of the subsequent sparse imaging. Then, under the joint constraint of minimum image entropy and sparsity, a network module is constructed in which a sparse imaging module and a sparse self-focusing module iterate in alternation; the sparse imaging module unrolls the Alternating Direction Method of Multipliers (ADMM) under motion error into an iterative neural network. The core of the sparse self-focusing module is a deep network that estimates the motion error under the minimum-entropy constraint. The method aims to provide a highly intelligent and efficient SAR rapid sparse self-focusing scheme and is expected to be applied in airborne and ground-based SAR self-focusing, among other fields.

Description

SAR rapid intelligent sparse self-focusing technology based on distance partition
Technical Field
The invention aims to provide a highly intelligent and efficient SAR sparse self-focusing solution, expected to be applied in fields such as airborne and ground-based SAR imaging. It can greatly improve the self-focusing quality and the speed of SAR sparse imaging, thereby improving the quality of SAR sparse images.
Background
Synthetic aperture radar (SAR) is a high-resolution radar that operates around the clock and in all weather, unconstrained by illumination and weather conditions. It obtains a two-dimensional high-resolution image of the target imaging area [1], achieving resolution in the range direction through pulse compression and in the azimuth direction through the synthetic aperture technique.
In essence, SAR imaging is a problem of estimating target parameters in the imaging scene: the radar actively transmits electromagnetic waves, receives the echoes of targets with different reflection coefficients in the scene, and inverts those reflection coefficients from the echoes by signal-processing methods. Conventional SAR imaging is essentially matched filtering (MF): a specific matched reference signal is correlated with the received echo so that energy accumulates over the target area, thereby inverting the target scene. However, this linear matching produces both a main lobe and side lobes for each target; the side lobes "blur" the true target information and reduce the readable information in the image. To overcome these problems, researchers combined compressed sensing (CS) theory with SAR imaging theory to establish a new approach: the SAR sparse imaging method. Compared with conventional SAR imaging, sparse imaging exploits the sparse prior of the target scene and solves the SAR imaging inverse problem with nonlinear processing, breaking through the performance limits of conventional methods to obtain high-quality, side-lobe-free SAR images.
However, the SAR sparse imaging method faces two major problems. First, conventional sparse solvers such as the Alternating Direction Method of Multipliers (ADMM) require repeated iterative inversion operations, which leads to high computational and storage complexity. Second, motion errors introduced by the platform defocus the reconstructed image and degrade its readability, and conventional sparse self-focusing methods built on the idea of echo matching cannot adapt to complex imaging scenes. How to perform self-focusing jointly with sparse imaging so as to obtain high-quality SAR sparse images quickly has therefore become a major difficulty in the SAR sparse imaging field.
To address these problems, the limitations of conventional SAR sparse-imaging self-focusing must be improved in both solving speed and accuracy; the invention therefore proposes a SAR rapid intelligent sparse self-focusing technique based on distance partitioning. First, the range data of the radar echo are reconstructed pulse by pulse and partitioned; the partitioned observation matrices are accurately rebuilt through slant-range resampling, and the distance-partition strategy reduces the computation and storage burden of the subsequent sparse imaging. Then, under the joint constraint of minimum image entropy and sparsity, a network module is constructed in which a sparse imaging module and a sparse self-focusing module iterate in alternation; the sparse imaging module unrolls the ADMM under motion error into an iterative neural network. The core sparse self-focusing module uses a deep network to estimate the motion error under the minimum-entropy constraint, finally yielding a well-focused SAR sparse imaging result. The invention is expected to be applied in fields such as airborne and ground-based SAR imaging, greatly improving the self-focusing quality and the speed of SAR sparse imaging and thereby the quality of SAR sparse images.
Disclosure of the Invention
The invention aims to overcome the defects of the prior art, solving the problems of high computational complexity and poor focusing quality in conventional SAR sparse imaging and self-focusing methods, and provides a SAR rapid intelligent sparse self-focusing technique based on distance partitioning. The implementation flow is shown in fig. 1.
The method is realized through the following steps:
Step one: perform sparse reconstruction along the range direction of the radar echo with a sparse reconstruction method to obtain a range reconstruction result, and partition this result by distance to obtain several blocks of distance-partition data.
Step two: based on the distance-partition data, accurately establish the mapping between the distance-partition data and the imaging-scene partitions through a slant-range resampling strategy.
Step three: establish a sparse imaging network module based on the ADMM iteration strategy, unrolling the ADMM iterative process into a neural-network iteration module and learning the hyperparameters of the iteration to obtain the sparse imaging result.
Step four: based on the sparse imaging network module, establish a sparse self-focusing network module based on the minimum-entropy constraint of the image.
Step five: train the iterative network modules to obtain a well-focused SAR sparse imaging result.
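The five steps can be read as a processing pipeline. The sketch below is purely illustrative (all function names, stub bodies, and toy data are assumptions, not from the patent) and is meant only to show the data flow between the steps:

```python
def sparse_range_reconstruct(echo):
    # Step one (stub): range-direction sparse reconstruction.
    return echo

def range_partition(D, K):
    # Step one: split the range reconstruction result into K distance blocks.
    n = len(D) // K
    return [D[i * n:(i + 1) * n] for i in range(K)]

def build_partition_matrix(block):
    # Step two (stub): slant-range resampling would build A_k here;
    # an identity matrix stands in for the real observation matrix.
    return [[1.0 if i == j else 0.0 for j in range(len(block))]
            for i in range(len(block))]

def imaging_and_autofocus(block, A_k):
    # Steps three to five (stub): unrolled-ADMM imaging plus
    # minimum-entropy self-focusing would run here.
    return block

echo = list(range(8))                          # toy range profile
blocks = range_partition(sparse_range_reconstruct(echo), K=4)
images = [imaging_and_autofocus(b, build_partition_matrix(b)) for b in blocks]
print(len(blocks), [len(b) for b in blocks])   # -> 4 [2, 2, 2, 2]
```

Each partition is processed independently, which is what allows the per-block observation matrices to stay small.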
The invention has the advantages that:
(1) A distance-partitioning scheme for radar data is established, and the mapping between the distance-partition data and the imaging-scene partitions is accurately built through the slant-range resampling strategy, which greatly reduces the computation and storage of the subsequent SAR sparse imaging module.
(2) To address the difficulty of hyperparameter selection in conventional SAR sparse imaging and the inability of echo-matching self-focusing to handle complex scenes, a neural network based on unrolled iteration and a minimum-image-entropy method are proposed; the network is trained without supervision using the echo loss and minimum image entropy, yielding a well-focused SAR sparse result.
Drawings
FIG. 1 is a block flow diagram of a distance-partition-based SAR rapid intelligent sparse self-focusing technique
FIG. 2 is a schematic diagram of the geometrical relationship of a radar platform and a point target
FIG. 3 is a schematic diagram of a mapping relationship between distance zone data and imaging scene zones
FIG. 4 is a schematic diagram of a sparse imaging network module
FIG. 5 is a schematic diagram of the sparse self-focusing module
FIG. 6 is a schematic view of a scene in an example of implementation
FIG. 7 shows the imaging results of different methods and the ground truth in the implementation example
Detailed Description
Embodiments of the method of the present invention will be described in detail below with reference to the accompanying drawings and examples.
The invention relates to a SAR rapid intelligent sparse self-focusing technology based on distance partitioning, wherein a flow chart is shown in figure 1, and the specific steps comprise:
Step one: perform sparse reconstruction along the range direction of the radar echo with a sparse reconstruction method to obtain a range reconstruction result, and partition this result by distance to obtain several blocks of distance-partition data.
As shown in fig. 2, which illustrates the geometry between the radar platform and a point target, the echo received by the radar from a point p in the imaging scene is:
s_p(f, t_a) = σ_p(x, y) · exp(−j4πf·R(t_a, x, y)/c)  (1)
where σ_p(x, y) is the backscattering coefficient of point p, R(t_a, x, y) is the distance from the radar to point p, f is the radar range frequency, c is the speed of light, and t_a is the azimuth time. Considering that the scene imaging grid contains many point targets, the radar echo becomes:
s(f, t_a) = ∫_{R_1}^{R_2} D(t_a, r) · exp(−j4π(f_0 + f)·r/c) dr  (2)
where σ(x, y) denotes the backscattering coefficient of a point target in the scene, R_1 and R_2 are the near and far slant-range bounds of the imaging scene S, and f_0 is the radar start frequency. D(t_a, r) denotes the sum of all point targets in the scene at slant range r at azimuth time t_a, and can be expressed as:
D(t_a, r) = ∬_S σ(x, y) · δ(r − R(t_a, x, y)) dx dy  (3)
where δ(·) is the impulse function. Discretizing the imaging region and the radar echo data in equation (2) gives:
s(f, t_a) = Σ_{i=1}^{L} D(t_a, r_i) · exp(−j4π(f_0 + f)·r_i/c)  (4)
where D(t_a, r_i) is the slant-range discretization result and L is the number of slant-range sample points. Writing the above in matrix form:
s=A·D (5)
where s is the received echo signal, A is the radar observation matrix, and D is the range reconstruction result of the target scene. D can be solved by ADMM. Partitioning the range reconstruction result D gives D = [D_1, …, D_K], where K is the number of partitions.
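The linear model s = A·D of equation (5) can be illustrated with a tiny discretized echo in the style of equation (4). Everything below is a toy sketch (frequency samples, slant-range grid, and scatterer values are invented), showing only that the observation matrix is built from phase terms exp(−j4πf·r/c) and that the echo is its product with the range profile:

```python
import cmath

# Toy discretized echo model: s = A · D, complex-valued.
c_light = 3e8                     # speed of light, m/s
f0 = 10e9                         # carrier frequency (Table 1), Hz
freqs = [f0 + df for df in (0.0, 5e6, 10e6, 15e6)]   # range-frequency samples
ranges = [1000.0, 1001.0, 1002.0]                    # slant-range grid r_i, m
D = [0j, 2.0 + 0j, 0j]                               # sparse range profile

# Observation matrix A[m][i] = exp(-j 4 pi f_m r_i / c)
A = [[cmath.exp(-1j * 4 * cmath.pi * f * r / c_light) for r in ranges]
     for f in freqs]
# Echo s = A · D
s = [sum(A[m][i] * D[i] for i in range(len(ranges))) for m in range(len(freqs))]

# With a single scatterer of amplitude 2, every echo sample has magnitude 2:
print([round(abs(v), 6) for v in s])
```

Partitioning D along the range axis into K blocks then yields the D_k on which the per-block matrices A_k operate.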
Step two: based on the distance-partition data, accurately establish the mapping between the distance-partition data and the imaging-scene partitions through a slant-range resampling strategy.
At a given azimuth time, the data at each range cell r is the superposition of all targets in the imaging scene whose slant range to the radar equals r. However, because the imaging grid spacing cannot be infinitely small, in most cases no grid point lies exactly at slant range r. As shown in fig. 3, for a given azimuth time and a given slant-range sample there is only one matching target along each azimuth row of the range dimension in the scene, and the matching targets at slant range r_k are not necessarily on grid points; this can be expressed as:
D_k(t_{a,m}, r_k) = Σ_p σ_k(p, q(p, t_{a,m}, r_k))  (6)
where σ_k(p, q(p, t_{a,m}, r_k)) is the scattering coefficient at slant range r_k, with p and q the azimuth and range grid indices, respectively. The slant ranges of the grid points are thus not necessarily those of the slant-range data D_k(t_{a,m}, r_k); the contributions mostly come from off-grid points, while the slant-range information r_k is what we need. We therefore obtain the backscattering coefficient at the required off-grid slant range by slant-range resampling, using the slant-range information of the surrounding grid points, which yields the following expression:
σ_k(p, q(p, t_{a,m}, r_k)) ≈ ((r_{q+1} − r_k)/dr)·σ(p, q) + ((r_k − r_q)/dr)·σ(p, q+1)  (7)
where dr is the slant-range resolution. Rewriting the above in matrix form gives:
D_k = A_k·x_k  (8)
so far, the accurate mapping relation between the partition data and the imaging scene partition is obtained.
Step three: establish a sparse imaging network module based on the ADMM iteration strategy, unrolling the ADMM iterative process into a neural-network iteration module and learning the hyperparameters of the iteration to obtain the sparse imaging result.
The sparse imaging module aims to obtain the SAR image x_k from the range-partition echo D_k and the range-partition measurement matrix A_k. In conventional SAR sparse imaging methods such as ADMM, the hyperparameter selection in the iterative process strongly affects the imaging result, which limits the applicability of sparse methods to SAR imaging.
Therefore, by unrolling the iterative process of ADMM into a neural network, the hyperparameters of the iteration can be learned to find their optimal values. As shown in fig. 4, the schematic of the sparse imaging network module, the three-step iterative process of ADMM is converted into three network modules, X, Z and M, which can be expressed as:
x_k^(n+1) = F_x(D_k, z_k^(n), m_k^(n); ρ^k)
z_k^(n+1) = F_z(x_k^(n+1), m_k^(n); λ^k)
m_k^(n+1) = F_m(m_k^(n), x_k^(n+1), z_k^(n+1))  (9)
where F_x(·), F_z(·) and F_m(·) denote the mapping functions of the network modules X, Z and M, respectively. ρ^k and λ^k are hyperparameters that the network can learn, whereas in plain ADMM these parameters are fixed. To form a self-supervised learning network, the echo reconstruction loss is used to learn the network hyperparameters, which can be expressed as:
L_echo = Σ_k ||D_k − A_k·x̂_k||₂²  (10)

where D̂_k = A_k·x̂_k denotes the reconstructed echo and x̂_k is the reconstruction result. A sparse reconstruction result can thus be obtained, but it still exhibits defocusing and must be processed by the focusing module of the next step.
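In unrolled-ADMM sparse imaging networks, the Z module is typically a complex soft-thresholding nonlinearity (the proximal operator of the ℓ1 norm) with λ as the learnable threshold. The patent does not spell out F_z, so the following is a standard sketch of that common choice, not the patent's exact module:

```python
def soft_threshold(x, lam):
    """Complex soft-thresholding: shrink the magnitude by lam, keep the
    phase. This is the usual proximal operator of the l1 norm used as the
    Z module in unrolled ADMM networks (lam plays the role of λ)."""
    mag = abs(x)
    if mag <= lam:
        return 0j          # small entries are zeroed: this enforces sparsity
    return x * (1.0 - lam / mag)

# Small entries vanish, large ones shrink toward zero with phase preserved:
z_small = soft_threshold(0.5 + 0j, 1.0)   # magnitude 0.5 < 1 -> zero
z_large = soft_threshold(3.0 + 4.0j, 1.0) # magnitude 5 -> 4, same phase
print(z_small, z_large)
```

Making λ (and the ADMM penalty ρ) learnable per layer is exactly what distinguishes the unrolled network from fixed-parameter ADMM.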
Step four: based on the sparse imaging network module, establish a sparse self-focusing network module based on the minimum-entropy constraint of the image.
The self-focusing network module uses the sparse-imaging network output x̂_k and the observation matrix A_k to estimate the distance-partition error matrix E_k. With motion errors introduced, equation (8) becomes:
D_k = E_k·A_k·x_k  (11)
the proposed sparse self-focusing module for estimating the motion error matrix consists of a series of focusing network layers. As shown in fig. 5, the network may be represented as an input (estimated SAR image x k And distance partition measurement matrix A k ) To output (distance zone motion error matrix E k ) The mapping of (1) uses a convolutional layer and a pooling layer as follows:
E_k^(n) = F^(M)(F^(M−1)(⋯ F^(1)(x̂_k^(n), A_k)))  (12)
where F^(m)(·) denotes the m-th network module, E_k^(n) is the motion-error matrix at the n-th iteration, and x̂_k^(n) is the image output by the sparse imaging network. The image entropy is used as the loss function, which can be expressed as:

L_entropy = En(Net_image(E_k^(n)·A_k, D_k))  (13)
where Net_image(·) denotes the forward pass of the imaging-module network and En(·) denotes the entropy of the image; the smaller the entropy, the better the focusing. In each iteration the observation matrix is updated, namely:
A_k^(n+1) = E_k^(n)·A_k  (14)
which ensures that the final solution converges. The optimum is found by gradient descent (with step size η):
E_k^(n+1) = E_k^(n) − η·∂L_entropy/∂E_k^(n)  (15)
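The entropy En(·) that drives the self-focusing is conventionally computed from the normalized pixel intensities; the patent does not write the definition out, so the sketch below assumes the standard form En(x) = −Σ p_ij·ln p_ij with p_ij = |x_ij|²/Σ|x|²:

```python
import math

def image_entropy(img):
    """Standard image entropy: -sum p_ij * ln p_ij, where
    p_ij = |x_ij|^2 / sum|x|^2. All energy in one pixel gives entropy 0;
    spreading the energy over more pixels increases the entropy."""
    powers = [abs(v) ** 2 for row in img for v in row]
    total = sum(powers)
    return -sum(p / total * math.log(p / total) for p in powers if p > 0)

focused = [[0j, 0j], [0j, 4 + 0j]]          # all energy in one pixel
defocused = [[2 + 0j, 2j], [2 + 0j, 2j]]    # same energy over four pixels
print(image_entropy(focused))                # entropy 0 (well focused)
print(image_entropy(defocused))              # entropy ln(4), defocused
```

Minimizing this quantity over the error matrix E_k is what pushes the network toward a focused image.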
all network operations take complex calculations into account and convert complex operations into real operations, such as quadrature demodulation, which can be expressed as:
A·x = [Re(A)·Re(x) − Im(A)·Im(x)] + j·[Re(A)·Im(x) + Im(A)·Re(x)]  (16)

where Re(·) and Im(·) denote taking the real and imaginary parts, respectively. A well-focused sparse SAR result is obtained through multiple iterations between the sparse imaging module and the self-focusing module.
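The complex-to-real conversion is the standard real/imaginary decomposition of a complex product, which lets real-valued network layers carry complex arithmetic. A toy check (values invented):

```python
def complex_mul_as_real(a_re, a_im, x_re, x_im):
    """Implement the complex product a*x with only real arithmetic,
    the way real-valued network layers must: Re/Im of the product are
    combinations of the Re/Im parts of the operands."""
    y_re = a_re * x_re - a_im * x_im
    y_im = a_re * x_im + a_im * x_re
    return y_re, y_im

a, x = 2.0 + 3.0j, 1.0 - 4.0j
y_re, y_im = complex_mul_as_real(a.real, a.imag, x.real, x.imag)
print(y_re, y_im)   # matches the direct complex product
print(a * x)        # -> (14-5j)
```

For matrices, the same identity gives the familiar real block form [Re(A) −Im(A); Im(A) Re(A)] acting on stacked [Re(x); Im(x)].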
Thus, the SAR rapid intelligent sparse self-focusing technique based on distance partitioning is realized.
Implementation Example
Fig. 6 shows the schematic diagram of the simulation scene: a lattice target of 16 points is placed in the scene, and the real radar track includes an added motion phase error. To improve generality, additive white Gaussian noise is added to the echo so that the signal-to-noise ratio of the imaging result is SNR = 23 dB. The simulated radar parameters are listed in Table 1.
Table 1 radar parameters in the example of implementation
Radar parameter              Value    Unit
Carrier frequency            10.0     GHz
Bandwidth                    1.0      GHz
Pitch angle                  30       deg
Synthetic aperture length    200      m
As can be seen in fig. 7(b), the conventional ADMM method yields a defocused imaging result because of the motion errors. As shown in fig. 7(c), the conventional sparse self-focusing method achieves some point focusing, but the effect is limited and many noise points remain. In contrast, as shown in fig. 7(d), the proposed algorithm obtains a well-focused result with the number of distance partitions K = 8. To evaluate the proposed algorithm quantitatively, the difference between the reconstruction and the ground truth is measured with the normalized mean square error (NMSE), defined as:
NMSE = Σ_{i=1}^{M} Σ_{j=1}^{N} |ĝ(i, j) − g(i, j)|² / Σ_{i=1}^{M} Σ_{j=1}^{N} |g(i, j)|²  (17)

where g(i, j) and ĝ(i, j) denote the true image and the reconstruction result, respectively, and M and N are the image dimensions. The evaluation indices for the implementation example are listed in Table 2; the proposed algorithm reconstructs the real target scene faster and more accurately than the other algorithms.
Table 2 example evaluation index
Metric      ADMM method    Conventional sparse self-focusing    Proposed method
NMSE        2.49           0.25                                 0.04
Run time    45 s           201 s                                201 s
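The NMSE of equation (17) is straightforward to compute directly; a minimal sketch on invented 2×2 "images" (the helper name and values are assumptions, not from the patent):

```python
def nmse(truth, recon):
    """Normalized mean square error between a true image g and a
    reconstruction g_hat: sum |g_hat - g|^2 / sum |g|^2."""
    num = sum(abs(recon[i][j] - truth[i][j]) ** 2
              for i in range(len(truth)) for j in range(len(truth[0])))
    den = sum(abs(v) ** 2 for row in truth for v in row)
    return num / den

g = [[1.0, 0.0], [0.0, 1.0]]        # toy "true" image
g_hat = [[0.9, 0.0], [0.0, 1.1]]    # toy reconstruction
print(nmse(g, g))       # perfect reconstruction -> 0.0
print(nmse(g, g_hat))   # small residual error
```

Lower NMSE means the reconstruction is closer to the ground truth, which is how Table 2 ranks the three methods.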
Therefore, the proposed technique solves the inability of conventional sparse self-focusing methods to focus sparsely both quickly and accurately, and handles the rapid self-focusing problem in sparse scenes well. In summary, the above embodiments are only preferred embodiments of the present invention and are not intended to limit its scope; any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within its protection scope.

Claims (5)

1. The SAR rapid intelligent sparse self-focusing method based on the distance partition is characterized by comprising the following steps of:
Step one: perform sparse reconstruction along the range direction of the radar echo with a sparse reconstruction method to obtain a range reconstruction result, and partition this result by distance to obtain several blocks of distance-partition data.
Step two: based on the distance-partition data, accurately establish the mapping between the distance-partition data and the imaging-scene partitions through a slant-range resampling strategy.
Step three: establish a sparse imaging network module based on the ADMM iteration strategy, unrolling the ADMM iterative process into a neural-network iteration module and learning the hyperparameters of the iteration to obtain the sparse imaging result.
Step four: based on the sparse imaging network module, establish a sparse self-focusing network module based on the minimum-entropy constraint of the image.
Step five: train the iterative network modules to obtain a well-focused SAR sparse imaging result.
2. The distance-partition-based SAR rapid intelligent sparse self-focusing method according to claim 1, wherein in step one, given the geometry between the radar platform and a point target, the echo received by the radar from a point p in the imaging scene is:
s_p(f, t_a) = σ_p(x, y) · exp(−j4πf·R(t_a, x, y)/c)
where σ_p(x, y) is the backscattering coefficient of point p, R(t_a, x, y) is the distance from the radar to point p, f is the radar range frequency, c is the speed of light, and t_a is the azimuth time.
3. The distance-partition-based SAR rapid intelligent sparse self-focusing method according to claim 1, wherein in step two, for a given azimuth time and a given slant-range sample there is only one matching target along each azimuth row of the range dimension in the scene, and the matching targets at slant range r_k are not necessarily on grid points, denoted as:
D_k(t_{a,m}, r_k) = Σ_p σ_k(p, q(p, t_{a,m}, r_k))
where σ_k(p, q(p, t_{a,m}, r_k)) is the scattering coefficient at slant range r_k, with p and q the azimuth and range grid indices, respectively.
4. The distance-partition-based SAR rapid intelligent sparse self-focusing method according to claim 1, wherein the three-step iterative process of ADMM in step three is converted into three network modules, X, Z and M, expressed as:
x_k^(n+1) = F_x(D_k, z_k^(n), m_k^(n); ρ^k)
z_k^(n+1) = F_z(x_k^(n+1), m_k^(n); λ^k)
m_k^(n+1) = F_m(m_k^(n), x_k^(n+1), z_k^(n+1))
where F_x(·), F_z(·) and F_m(·) denote the mapping functions of the network modules X, Z and M, respectively; ρ^k and λ^k are hyperparameters that the network can learn, whereas in plain ADMM these parameters are fixed. To form a self-supervised learning network, the echo reconstruction loss is used to learn the network hyperparameters, expressed as:
L_echo = Σ_k ||D_k − A_k·x̂_k||₂²

where D̂_k = A_k·x̂_k denotes the reconstructed echo and x̂_k is the reconstruction result.
5. The distance-partition-based SAR rapid intelligent sparse self-focusing method according to claim 1, wherein the mapping from input to output in step four uses convolutional and pooling layers as follows:
E_k^(n) = F^(M)(F^(M−1)(⋯ F^(1)(x̂_k^(n), A_k)))
where F^(m)(·) denotes the m-th network module, E_k^(n) is the motion-error matrix at the n-th iteration, and x̂_k^(n) is the image output by the sparse imaging network. The image entropy is used as the loss function, expressed as:

L_entropy = En(Net_image(E_k^(n)·A_k, D_k))

where Net_image(·) denotes the forward pass of the imaging-module network and En(·) denotes the entropy of the image.
CN202310251820.7A 2023-03-08 2023-03-08 SAR rapid intelligent sparse self-focusing technology based on distance partition Pending CN116243313A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310251820.7A CN116243313A (en) 2023-03-08 2023-03-08 SAR rapid intelligent sparse self-focusing technology based on distance partition


Publications (1)

Publication Number Publication Date
CN116243313A true CN116243313A (en) 2023-06-09

Family

ID=86624058

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310251820.7A Pending CN116243313A (en) 2023-03-08 2023-03-08 SAR rapid intelligent sparse self-focusing technology based on distance partition

Country Status (1)

Country Link
CN (1) CN116243313A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117148347A (en) * 2023-06-13 2023-12-01 中国人民解放军空军预警学院 Two-dimensional joint imaging and self-focusing method based on deep learning network


Similar Documents

Publication Publication Date Title
CN109100718B (en) Sparse aperture ISAR self-focusing and transverse calibration method based on Bayesian learning
CN105842694B (en) A kind of self-focusing method based on FFBP SAR imagings
CN104950305B (en) A kind of real beam scanning radar angle super-resolution imaging method based on sparse constraint
CN106772365A (en) A kind of multipath based on Bayes's compressed sensing utilizes through-wall radar imaging method
CN106405548A (en) Inverse synthetic aperture radar imaging method based on multi-task Bayesian compression perception
CN108008385A (en) Interference environment ISAR high-resolution imaging methods based on management loading
CN112946601B (en) Gauss-Seidel-based efficient distributed target phase optimization method
CN111948652B (en) SAR intelligent parameterized super-resolution imaging method based on deep learning
CN116243313A (en) SAR rapid intelligent sparse self-focusing technology based on distance partition
CN108646247A (en) Inverse synthetic aperture radar imaging method based on Gamma process linear regression
CN112147608A (en) Rapid Gaussian gridding non-uniform FFT through-wall imaging radar BP method
CN117129994B (en) Improved backward projection imaging method based on phase compensation nuclear GNSS-SAR
CN113608218A (en) Frequency domain interference phase sparse reconstruction method based on back projection principle
CN108931770B (en) ISAR imaging method based on multi-dimensional beta process linear regression
CN105068071B (en) A kind of fast imaging method based on backprojection operator
CN117289274A (en) Single-channel forward-looking super-resolution imaging method based on optimized self-adaptive matching tracking
CN114966687A (en) Sparse ISAR imaging method and system based on low rank and non-local self-similarity
CN112946644B (en) Based on minimizing the convolution weight l1Norm sparse aperture ISAR imaging method
CN115421115A (en) Weight-weighted alternating direction multiplier method for combining phase correction and ISAR imaging
Zaydullin et al. Motion Correction Using Deep Learning Neural Networks-Effects of Data Representation
CN114720984B (en) SAR imaging method oriented to sparse sampling and inaccurate observation
CN113126095B (en) Two-dimensional ISAR rapid imaging method based on sparse Bayesian learning
CN114879188B (en) Model-adaptive deep learning SAR three-dimensional imaging method
CN117289272B (en) Regularized synthetic aperture radar imaging method based on non-convex sparse optimization
CN118688799A (en) Forward-looking three-dimensional super-resolution imaging method for high-speed motion platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination