CN111537999A - Robust and efficient decomposition projection automatic focusing method - Google Patents


Publication number
CN111537999A
Authority
CN
China
Prior art keywords
sub
image
aperture
data
polar coordinate
Prior art date
Legal status
Granted
Application number
CN202010143982.5A
Other languages
Chinese (zh)
Other versions
CN111537999B (en)
Inventor
沈龙
王家豪
裴凌
马仪
周仿荣
马御棠
张旭东
Current Assignee
Electric Power Research Institute of Yunnan Power Grid Co Ltd
Original Assignee
Electric Power Research Institute of Yunnan Power Grid Co Ltd
Priority date
Filing date
Publication date
Application filed by Electric Power Research Institute of Yunnan Power Grid Co Ltd
Priority application: CN202010143982.5A
Publication of application: CN111537999A
Application granted
Publication of grant: CN111537999B
Legal status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G01S13/89 Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S13/90 Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • G01S13/9004 SAR image acquisition techniques
    • G01S13/9017 SAR image acquisition techniques with time domain processing of the SAR signals in azimuth

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The application discloses a decomposed projection automatic focusing method. The method comprises: acquiring raw data over the whole synthetic aperture time and dividing it into N pieces of sub-aperture data; performing focused imaging on each piece of sub-aperture data to obtain first sub-images; registering the first sub-images in a ground virtual polar coordinate system to obtain registered image data; performing error correction and sub-image fusion on the registered image data with a fast coordinate descent method to obtain second sub-image data; and outputting the focused imaging result once the second sub-image data is a single sub-image. By applying the PGA algorithm in the ground virtual polar coordinate system, the method overcomes the incompatibility between the traditional frequency domain methods and time domain methods, realizes automatic focusing within the FFBP algorithm, and effectively improves the sub-aperture image focusing accuracy. A residual phase error model of the sub-images is established, so the method suits the strip-map mode of an unstable radar platform, improves the error estimation accuracy, and reduces the computational complexity.

Description

Robust and efficient decomposition projection automatic focusing method
Technical Field
The invention relates to the technical field of signal processing, and in particular to a robust and efficient decomposed projection automatic focusing method suitable for focused imaging of data acquired by SAR platforms with non-constant motion speed and irregular tracks, such as helicopters and airships.
Background
Satellite-borne synthetic aperture radar (SAR) is among the most rapidly developing and effective sensors in microwave remote sensing. As an active sensor, it is not limited by illumination or climatic conditions and can realize all-time, all-weather earth observation.
To date, the back projection (BP) algorithm is a common SAR imaging algorithm for obtaining a focused SAR image. It is a time domain algorithm that depends on an accurately known antenna phase center path, and it can be applied to SAR systems with various wavelengths, bandwidths and operating modes. However, the traditional BP algorithm cannot meet practical requirements because of its high computational complexity and low focusing imaging speed.
To address the problems of the conventional BP algorithm, many improved BP algorithms proposed in recent years strive to reduce the computational complexity. Representative among them is the fast back projection (FBP) algorithm, which reduces complexity by imaging sub-images on polar grids with a small number of cells. Building on FBP, the patent document "A fast factorized back projection SAR self-focusing method" (application No. 2017100935722, publication No. CN106802416A) combines a phase error optimization model with the fast factorized back projection (FFBP) imaging algorithm; however, when performing focused imaging it still relies on a conventional image-domain autofocus method, so the computational complexity remains large. As another example, "A self-focusing method based on FFBP SAR imaging" (application No. 201610177551.4, publication No. 105842694B) proposes, within an FFBP framework, extracting the phase gradient of the motion error from the phase difference between adjacent sub-apertures of a point target, estimating the motion error by integration and compensating it, thereby achieving self-focusing of the SAR image.
Disclosure of Invention
The invention aims to provide a robust and efficient decomposed projection automatic focusing method that addresses the high computational complexity, large calibration-image deformation error and low phase error estimation accuracy of prior-art algorithms. The proposed algorithm improves the focusing effect and yields higher-quality images.
The application provides a robust and efficient decomposition projection automatic focusing method, which comprises the following steps:
acquiring original data in the whole synthetic aperture time, and dividing the original data into N sub-aperture data;
carrying out focusing imaging processing on each sub-aperture data to obtain a first sub-image;
registering the first sub-image in a ground virtual polar coordinate system to obtain registered image data;
error correction and subimage fusion are carried out on the registered image data by adopting a rapid coordinate descent method to obtain second subimage data;
judging whether the second sub-image data is the only sub-image, if so, outputting the sub-image as a focusing imaging result; if not, continuing to perform the step of registering the first sub-image.
The method provided by the application at least has the following beneficial effects:
First, the invention applies the PGA algorithm in the ground virtual polar coordinate system, overcomes the incompatibility between the traditional frequency domain methods and time domain methods, realizes automatic focusing within the FFBP algorithm, effectively improves the sub-aperture image focusing accuracy, and reduces the computational complexity.
Second, the invention improves the accuracy of the final imaging result by using image registration to correct systematic calibration errors.
Third, the invention establishes a residual phase error model of the sub-images, estimates the phase error with the fast coordinate descent (FCD) method, compensates it, and merges all sub-images into a full-resolution focused image; the method therefore suits the strip-map mode of an unstable radar platform, improves the error estimation accuracy, and greatly reduces the computational complexity.
Drawings
In order to explain the technical solutions of the present application more clearly, the drawings used in the embodiments are briefly described below. Those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flow chart of a method provided herein;
FIG. 2 is a decomposition flowchart of step S20 of the method of FIG. 1;
FIG. 3 is a diagram showing the result of imaging measured data according to the method of the present invention;
FIG. 4 is a diagram of the result of the existing FFBP method imaging measured data;
FIG. 5 is a graph showing the imaging results of the corner reflector in the imaging of measured data according to the method of the present invention;
FIG. 6 is a diagram illustrating the imaging result of a corner reflector in the imaging of measured data by the FPA method according to the prior art;
FIG. 7 is a graph of the result of the imaging of measured data by the method of the present invention;
FIG. 8 is a diagram of the result of the existing FPA method for imaging the measured data;
FIG. 9 is a graph comparing the imaging performance of the method of the present invention with that of the prior art.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, it is a flowchart of a robust and efficient decomposition projection auto-focusing method according to the present application;
as can be seen from fig. 1, the embodiment of the present application provides a robust and efficient decomposition projection auto-focusing method, which specifically includes the following steps:
s10: acquiring original data in the whole synthetic aperture time, and dividing the original data into N sub-aperture data;
In this embodiment, this step is implemented according to the basic principle of the fast factorized back projection imaging algorithm (hereinafter the FFBP algorithm); specifically:
according to the FFBP principle, the whole synthetic aperture is divided, with base 2, into N shorter sub-apertures, each containing its sub-aperture data; the sub-aperture length is taken as 1 to facilitate subsequent processing and calculation.
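The base-2 split described above can be sketched as follows; the array shape and helper name are illustrative assumptions, not from the patent:

```python
import numpy as np

def split_subapertures(raw: np.ndarray, k: int):
    """Split raw SAR data (azimuth x range) into N = 2**k sub-apertures.

    Assumes the azimuth dimension is divisible by 2**k, mirroring the
    base-2 decomposition used by FFBP.
    """
    n_sub = 2 ** k
    n_az, _ = raw.shape
    if n_az % n_sub != 0:
        raise ValueError("azimuth length must be divisible by 2**k")
    # Each sub-aperture keeps a contiguous block of azimuth pulses.
    return np.split(raw, n_sub, axis=0)

subs = split_subapertures(np.zeros((64, 128), dtype=complex), k=3)
```

Each element of `subs` then plays the role of one piece of sub-aperture data in the steps that follow.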
Further, after dividing the raw data into N pieces of sub-aperture data, the method further comprises:
performing range compression (distance compression) on each piece of sub-aperture data to obtain N range-compressed sub-aperture data; the compression degree and process are not limited in this embodiment;
and then projecting each range-compressed sub-aperture data onto a sub-aperture virtual polar coordinate grid whose origin is the sub-aperture center.
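Range compression of each sub-aperture is commonly implemented as frequency-domain matched filtering; a generic sketch under assumed chirp parameters (the values and helper name are illustrative, not taken from the patent):

```python
import numpy as np

def range_compress(data: np.ndarray, ref_chirp: np.ndarray) -> np.ndarray:
    """Matched-filter each pulse (row) against the transmitted chirp replica."""
    n_fft = data.shape[1]
    ref_f = np.conj(np.fft.fft(ref_chirp, n_fft))  # matched-filter spectrum
    return np.fft.ifft(np.fft.fft(data, n_fft, axis=1) * ref_f, axis=1)

# Illustrative linear-FM chirp replica
t = np.linspace(-0.5e-6, 0.5e-6, 128)
chirp = np.exp(1j * np.pi * 1e12 * t ** 2)
pulses = np.tile(chirp, (4, 1))             # 4 identical echoes at zero delay
compressed = range_compress(pulses, chirp)  # each row peaks at zero delay
```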
S20: carrying out focusing imaging processing on each sub-aperture data to obtain a first sub-image;
In this embodiment, an improved PGA algorithm is used to focus and image the sub-aperture data, generating N pieces of focused sub-image data recorded as the first sub-images (also called primary sub-images). The PGA algorithm, also known as the phase gradient autofocus algorithm, is improved here by rewriting the sub-aperture focusing formula of the conventional FFBP algorithm, thereby improving the focusing accuracy.
Specifically, as shown in fig. 2, the determining step of the improved PGA algorithm includes:
s21: constructing a radar and target slant distance equation under the sub-aperture virtual polar coordinate, which is shown as the following formula:
Figure BDA0002400078510000031
wherein R represents the slant distance between the radar and the target P, RpIs the polar diameter, omega, of the target P under the sub-aperture virtual polar coordinate systempRepresenting the projection vector of the target P under the sub-aperture virtual polar coordinate system,
Figure BDA0002400078510000032
represents a square-on operation ·2Representing squaring operation, v representing the motion speed of the radar platform, and t representing azimuth time;
s22: determining a sub-aperture focusing imaging formula under a ground virtual polar coordinate;
Firstly, the sub-aperture focusing formula in the FFBP algorithm is rewritten, and the Fourier-transform-pair relation between the image domain and the frequency domain under the sub-aperture virtual polar coordinates is established.
The known sub-aperture focusing formula is:
I(r_p, k_Ω) = ∫ exp(−j·k_Ω·v·t) dt = 2L·Sinc(k_Ω·v·L)
wherein I denotes the focused sub-image, r_p is the polar radius of the target P in the sub-aperture virtual polar coordinate system, k_Ω is the wavenumber, ∫(·)dt is the integration operation, exp(·) is the exponential operation, j is the imaginary unit, v represents the radar platform motion speed, t represents the azimuth time, L is the length of the integration interval, and Sinc(·) is the sinc function.
The above equation shows that the image domain and the frequency domain of the focusing result in the sub-aperture virtual polar coordinates satisfy the Fourier-transform-pair relation, which proves that the PGA method can be used for sub-aperture focused imaging in the FFBP algorithm.
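The PGA processing that this Fourier-pair property enables can be sketched in its textbook form (center-shift, phase-gradient estimate, integrate, compensate); this is the classical PGA loop, not the patent's exact improved variant, and the array layout is an assumption:

```python
import numpy as np

def pga_iteration(img: np.ndarray) -> np.ndarray:
    """One phase-gradient-autofocus pass over an image (range x azimuth)."""
    n_az = img.shape[1]
    # 1. Circularly shift the brightest scatterer of each range bin to center.
    centered = np.empty_like(img)
    for i, row in enumerate(img):
        centered[i] = np.roll(row, n_az // 2 - int(np.argmax(np.abs(row))))
    # 2. Transform to the azimuth phase-history domain.
    hist = np.fft.fft(centered, axis=1)
    # 3. Estimate the phase gradient across range bins and integrate it.
    num = np.sum(np.imag(np.conj(hist[:, :-1]) * hist[:, 1:]), axis=0)
    den = np.sum(np.abs(hist[:, :-1]) ** 2, axis=0) + 1e-12
    phi = np.concatenate(([0.0], np.cumsum(num / den)))
    # 4. Remove the estimated phase error and return to the image domain.
    return np.fft.ifft(hist * np.exp(-1j * phi), axis=1)

# A perfectly focused point target per range bin: the pass leaves it unchanged.
img = np.zeros((4, 32), dtype=complex)
img[:, 16] = 1.0
out = pga_iteration(img)
```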
Then, determining a sub-aperture focusing imaging formula under the ground virtual polar coordinate:
Ignoring the amplitude information of the sub-image, the previous sub-aperture focusing formula can be rewritten as:
I(r_p, k_Ω) = Sinc(k_Ω·v·L)·exp(−j·4π·r_near / (λ·cos θ))
wherein I denotes the focused sub-image, r_p is the polar radius of the target P in the sub-aperture virtual polar coordinate system, k_Ω is the wavenumber, Sinc(·) is the sinc function, v represents the radar platform motion speed, t is the azimuth time, exp(·) is the exponential operation, j is the imaginary unit, r_near is the nearest distance between the target P and the radar track in the sub-aperture virtual polar coordinate system, λ is the radar operating wavelength, cos(·) is the cosine operation, and θ is the polar angle in the sub-aperture virtual polar coordinate system.
Let θ = θ_p + Δθ; then the following holds:
cos θ = cos(θ_p + Δθ) = cos θ_p·cos Δθ − sin θ_p·sin Δθ,  Ω − Ω_p = cos θ − cos θ_p
wherein cos(·) is the cosine operation, θ is the polar angle in the sub-aperture virtual polar coordinate system, θ_p is the polar angle of the target P in the sub-aperture virtual polar coordinates, Δθ is the polar angle variation, Ω is the projection vector in the sub-aperture virtual polar coordinate system, Ω_p is the projection vector of the target P in the sub-aperture virtual polar coordinate system, and sin(·) is the sine operation.
Since the main energy of the target is distributed within a small angular range, cos Δθ ≈ 1 and sin Δθ ≈ Δθ. Thus, the sub-aperture focusing imaging formula can be expressed as:
I(r, k_Ω) ≈ Sinc(k_Ω·v·L)·exp(−j·4π·r_p/λ)·exp(−j·(4π/λ)·r·tan θ_p·Δθ)
wherein I denotes the focused sub-image, r_p is the polar radius of the target P in the sub-aperture virtual polar coordinate system, k_Ω is the wavenumber, Sinc(·) is the sinc function, v represents the radar platform motion speed, t is the azimuth time, exp(·) is the exponential operation, j is the imaginary unit, θ is the polar angle in the sub-aperture virtual polar coordinates, cos(·) is the cosine operation, λ is the radar operating wavelength, r is the polar radius in the sub-aperture virtual polar coordinates, tan(·) is the tangent operation, and θ_p is the polar angle of the target P in the sub-aperture virtual polar coordinates.
And because the conversion relation between the ground virtual polar coordinates and the sub-aperture virtual polar coordinates is:
r = √(r′² + h²),  θ_inc = atan(r′/h),  Ω = Ω′·sin θ_inc
wherein r is the polar radius in the sub-aperture virtual polar coordinate system, r′ is the polar radius in the ground virtual polar coordinate system, θ_inc is the radar incidence angle, sin(·) is the sine operation, atan(·) is the arctangent operation, h is the height of the radar above the ground, Ω is the projection vector in the sub-aperture virtual polar coordinate system, and Ω′ is the projection vector in the ground virtual polar coordinate system.
Since θ_inc corresponds to r′, the sub-aperture focusing imaging formula can finally be expressed as:
I(r′, k′_Ω) ≈ Sinc(k′_Ω·v·L)·exp(−j·4π·r_p/λ)·exp(−j·(4π/λ)·r′·tan θ_p·Δθ)
k′_Ω = 4π·(Ω′ − Ω′_p)/λ;
wherein I denotes the focused sub-image, r′ is the polar radius in the ground virtual polar coordinate system, k′_Ω is the wavenumber, Sinc(·) is the sinc function, exp(·) is the exponential operation, j is the imaginary unit, r_p is the polar radius of the target P in the sub-aperture virtual polar coordinate system, cos(·) is the cosine operation, θ is the polar angle in the sub-aperture virtual polar coordinate system, λ is the radar operating wavelength, tan(·) is the tangent operation, θ_p is the polar angle of the target P in the sub-aperture virtual polar coordinates, Ω′ is the projection vector in the ground virtual polar coordinate system, and Ω′_p is the projection vector of the target point P in the ground virtual polar coordinate system.
The above formula is a sub-aperture focusing imaging formula under the ground virtual polar coordinate, and the formula shows that the image domain and the frequency domain in the ground virtual polar coordinate system also satisfy the Fourier transform pair relationship.
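Assuming the flat-ground geometry implied by the conversion relation (radar at height h above the scene), the mapping between the ground polar radius and the slant (sub-aperture) polar radius can be sketched as:

```python
import math

def ground_to_subaperture(r_ground: float, h: float):
    """Map a ground-plane polar radius to the slant polar radius and the
    incidence angle, assuming flat terrain and radar height h."""
    theta_inc = math.atan2(r_ground, h)   # incidence angle
    r_slant = math.hypot(r_ground, h)     # slant polar radius
    return r_slant, theta_inc

# 3-4-5 geometry: ground radius 3000 m, height 4000 m -> slant radius 5000 m.
r_slant, theta_inc = ground_to_subaperture(3000.0, 4000.0)
```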
Meanwhile, as can also be seen from fig. 2, when the above-mentioned improved PGA algorithm is applied to the sub-aperture data in step S10, step S20 may further be divided into:
s23: projecting the subaperture data under each subaperture virtual polar coordinate to a ground virtual polar coordinate system;
s24: focusing and imaging each sub-aperture data by adopting a PGA algorithm under a ground virtual polar coordinate; the improved PGA algorithm adopted in this step, i.e., obtained in steps S21-S22, finally obtains the first sub-image (also referred to as the first-level sub-aperture data) focused and imaged in the virtual polar coordinate system on the ground.
Next, as can be seen from fig. 1, the processes of registration, correction, and fusion of the first sub-image are required.
S30: registering the first sub-image in a ground virtual polar coordinate system to obtain registered image data;
in this embodiment, step S30 may be further divided into two sub-steps of coarse registration and fine registration, specifically:
Firstly, coarse sub-image registration is performed: an optical image of the illuminated target scene is selected as the reference image, and the N sub-images are each aligned to this reference to obtain N coarsely registered sub-images.
Then, fine sub-image registration is performed: based on the coarsely registered sub-images, fine registration is carried out with a conventional method such as maximum correlation (other methods may be used; this embodiment is not limited thereto), yielding N finely registered sub-images as the registered image data.
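Maximum-correlation registration, mentioned above as one conventional option, can be sketched with an FFT-based circular cross-correlation; the helper name and array shapes are illustrative assumptions:

```python
import numpy as np

def estimate_shift(ref: np.ndarray, img: np.ndarray):
    """Estimate the integer (row, col) shift of img relative to ref by
    locating the peak of their circular cross-correlation."""
    xcorr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref)))
    peak = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
    # Interpret peaks past the midpoint as negative shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, xcorr.shape))

ref = np.zeros((32, 32)); ref[10, 12] = 1.0
img = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)  # known (3, -2) shift
shift = estimate_shift(ref, img)
```

In practice the recovered shift would be applied to resample each sub-image onto the common ground polar grid.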
S40: error correction and subimage fusion are carried out on the registered image data by adopting a rapid coordinate descent method to obtain second subimage data;
The principle of the fast coordinate descent method is as follows:
S41: the image is obtained in discrete form:
I = Σ_{n=1}^{N} b_n
wherein I represents the image in discrete form, Σ(·) represents the summation operation, and b_n represents the n-th registered sub-image.
S42: introducing an optimal phase correction factor:
considering the presence of phase errors, the discrete form image can be written as:
Figure BDA0002400078510000063
wherein
Figure BDA0002400078510000064
A discrete form of the out-of-focus image is represented,
Figure BDA0002400078510000065
it is indicated that the summing operation is performed,
Figure BDA0002400078510000066
indicating the nth sub-image with phase error. bnIndicating that there is no phase error, e indicates a natural logarithm, j indicates an imaginary unit,
Figure BDA0002400078510000067
indicating the error phase.
The aim of the autofocus method is to obtain an estimated phase error
Figure BDA0002400078510000068
And compensates the phase error to the defocused image. Image definition xi | | | I | | non-conducting phosphor2And the phase of the error is in inverse proportion, and the more accurate the estimated phase error is, the greater the image definition is. Thus, the optimal phase correction factor can be written as:
Figure BDA0002400078510000069
the discrete image with the introduced correction factor can be written as:
Figure BDA00024000785100000610
wherein
Figure BDA00024000785100000611
Representing the focus imaging results in discrete form, ∑ g representing the summation operation,
Figure BDA00024000785100000612
indicating the nth sub-image with phase error, e indicating the natural logarithm, j indicating the imaginary unit,
Figure BDA0002400078510000071
representing the estimated error phase of the nth sub-image,
Figure BDA0002400078510000072
denotes the n-th0A sub-image in which a phase error exists,
Figure BDA0002400078510000073
denotes the n-th0Error phase of the amplitude sub-image.
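The sharpness ξ = ||I||² that drives the correction can be evaluated directly; a small numeric sketch (array shapes illustrative) showing that accurate phase compensation maximizes it, while a wrong correction destroys it:

```python
import numpy as np

def sharpness(sub_images: np.ndarray, phases: np.ndarray) -> float:
    """xi = ||I||^2 for the coherent sum of phase-corrected sub-images.

    sub_images: (N, pixels) complex array; phases: (N,) correction phases.
    """
    image = np.sum(sub_images * np.exp(-1j * phases)[:, None], axis=0)
    return float(np.sum(np.abs(image) ** 2))

# Two copies of one sub-image: in-phase summation doubles the amplitude,
# so xi is 4x that of a single image; a pi offset cancels the sum entirely.
b = np.ones((2, 8), dtype=complex)
aligned = sharpness(b, np.array([0.0, 0.0]))
opposed = sharpness(b, np.array([0.0, np.pi]))
```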
S43: let
u_{n0} = Σ_{n≠n0} b̃_n·e^{−j·φ̂_n}
so that
Î = u_{n0} + b̃_{n0}·e^{−j·φ̂_{n0}}
The sharpness of the i-th sub-image can then be expressed as:
γ_i = Σ |Î_i|² = Σ ( |u_i|² + |b̃_i|² + 2·Re{ u_i*·b̃_i·e^{−j·φ̂_i} } )
wherein γ_i denotes the sharpness of the i-th sub-image, Î_i denotes the sub-image after the i-th phase error correction, |·| denotes the absolute-value operation, Re{·} denotes taking the real part, * denotes the conjugate operation, e is the base of the natural logarithm, j is the imaginary unit, and φ̂_i represents the estimated error phase. Thus γ_i can be maximized by the CD algorithm, yielding the phase error correction factor φ̂_i of the sub-image.
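A minimal coordinate descent loop over the per-sub-image phases, maximizing ||I||² one coordinate at a time by grid search; this is an illustrative stand-in for the fast coordinate descent update, whose closed form is not reproduced here:

```python
import numpy as np

def coordinate_descent(sub_images, n_sweeps=3, n_grid=64):
    """Estimate per-sub-image phase corrections by cyclically maximizing
    the sharpness ||sum_n b_n * exp(-1j*phi_n)||^2 one phase at a time."""
    n = len(sub_images)
    phases = np.zeros(n)
    grid = np.linspace(0, 2 * np.pi, n_grid, endpoint=False)
    for _ in range(n_sweeps):
        for i in range(n):
            rest = sum(b * np.exp(-1j * p)
                       for k, (b, p) in enumerate(zip(sub_images, phases)) if k != i)
            # Try each candidate phase for coordinate i, keep the sharpest.
            scores = [np.sum(np.abs(rest + sub_images[i] * np.exp(-1j * g)) ** 2)
                      for g in grid]
            phases[i] = grid[int(np.argmax(scores))]
    return phases

# Two sub-images differing by a known phase error of pi/2.
b0 = np.ones(16, dtype=complex)
subs = [b0, b0 * np.exp(1j * np.pi / 2)]
phi = coordinate_descent(subs)
```

Only the phase difference between coordinates is identifiable, so the recovered pair is checked modulo a common offset.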
S44: constructing an applicable condition of a rapid coordinate descent algorithm:
discrete focused image obtained by GPA and registration
Figure BDA00024000785100000711
Can be written as:
Figure RE-GDA00025714504700000713
wherein
Figure RE-GDA00025714504700000714
Representing the m-th focused sub-image. Since GPA algorithm processing has been performed, N sub-images have constant residual constant phase error
Figure RE-GDA00025714504700000715
And thus can be processed by a fast coordinate descent algorithm.
Further, the steps of correcting the phase error and fusing the sub-images with the fast coordinate descent algorithm can be divided into:
S45: acquiring the registered i-th-level image data;
S46: constructing the sequence of sub-image index values M = [1, 3, 5, …, N−1], with N/2 elements in total; at the first correction, j = 1;
S47: correcting the phase error between the M_j-th and the (M_j+1)-th sub-images with the coordinate descent algorithm: for k equal to M_j and M_j+1 in turn, estimate φ̂_k by coordinate descent; when γ_i reaches its maximum, take the corresponding φ̂_k as the phase error estimate of sub-images M_j and M_j+1, and generate the (M_j+1)/2-th data of the (i+1)-th-level sub-images by image fusion.
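The level-by-level pairwise fusion can be sketched as follows; the coherent sum is a stand-in for the coordinate-descent-corrected merge, and the shapes are illustrative:

```python
import numpy as np

def fuse_level(sub_images):
    """Merge adjacent sub-image pairs (1,2), (3,4), ... (1-based) into the
    next level; each fused image halves the sub-image count."""
    assert len(sub_images) % 2 == 0
    return [sub_images[m] + sub_images[m + 1]      # coherently fused pair
            for m in range(0, len(sub_images), 2)]

level = [np.full(4, 1 + 0j) for _ in range(8)]     # 8 first-level sub-images
while len(level) > 1:                              # iterate until one image
    level = fuse_level(level)
final = level[0]
```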
S50: judging whether the second sub-image data is the only sub-image, if so, outputting the sub-image as a focusing imaging result; if not, continuing to perform the step of registering the first sub-image.
Specifically, step S50 includes:
firstly, judging whether j is greater than N/2; if so, updating the number of sub-images to N = N/2; if not, setting j = j + 1 and re-executing step S46;
then judging whether N is equal to 1; if so, determining the second sub-image data as the only sub-image; if not, setting i = i + 1 and re-executing step S30.
According to the technical scheme, the robust and efficient decomposition projection automatic focusing method is provided, the problem that a traditional frequency domain method is incompatible with a time domain method is solved by applying a PGA algorithm to a ground virtual polar coordinate system, automatic focusing in an FFBP algorithm is achieved, sub-aperture image focusing accuracy is effectively improved, and computational complexity is reduced.
Based on the method provided above, experiments were performed in a practical application scenario; the basic parameters of the experimental data are as follows:
[Table image: basic parameter information of the experimental data]
the experimental procedure and experimental results are as follows:
firstly, acquiring radar original data;
secondly, carrying out focusing imaging processing on the data by using the method provided by the invention to obtain an actual measurement data imaging result;
and thirdly, comparing results.
Comparing the simulation-data imaging result of the method of the invention with that of the existing FFBP method (FIG. 3 and FIG. 4 respectively), the result obtained by the method of the invention is clearer, with more uniform point-target distribution, higher brightness and better imaging quality. The peak sidelobe ratios of the FFBP method and of the method of the invention are -7.91 dB and -13.01 dB respectively; the lower peak sidelobe ratio of the proposed method indicates a better focusing effect.
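The peak sidelobe ratio quoted above can be measured from a 1-D cut of the impulse response; a generic sketch on an ideal sinc response (illustrative, not the patent's data, where the first sidelobe sits near -13.26 dB):

```python
import numpy as np

def pslr_db(response: np.ndarray) -> float:
    """Peak sidelobe ratio: highest sidelobe relative to the mainlobe, in dB.

    The mainlobe is taken as the region between the first nulls around the peak.
    """
    mag = np.abs(response)
    peak = int(np.argmax(mag))
    # Walk outward from the peak to the first local minima (the nulls).
    left = peak
    while left > 0 and mag[left - 1] < mag[left]:
        left -= 1
    right = peak
    while right < len(mag) - 1 and mag[right + 1] < mag[right]:
        right += 1
    sidelobes = np.concatenate([mag[:left], mag[right + 1:]])
    return 20.0 * np.log10(sidelobes.max() / mag[peak])

x = np.linspace(-8, 8, 1601)
pslr = pslr_db(np.sinc(x))
```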
Comparing the measured-data imaging result of the method of the invention with that of the existing FPA method (FIG. 5 and FIG. 6), the corner-reflector imaging result obtained by the method of the invention has more concentrated energy and a good focusing effect, while the result of the FPA method is dispersed in energy and severely defocused.
Comparing the corner-reflector imaging result in the measured-data imaging with that of the existing FPA method (FIG. 7 and FIG. 8), the result obtained by the method of the invention is clearer, with high detail restoration and stronger light-dark contrast, while the result of the FPA method is more severely defocused and blurry.
Comparing the performance of the imaging results of the method of the invention with the existing methods (FIG. 9), the energy of the result obtained by the method of the invention is the most concentrated, the ASH method ranks second, and the FGA method gives the worst imaging effect.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (9)

1. A robust and efficient decomposition projection auto-focusing method, the method comprising:
acquiring original data in the whole synthetic aperture time, and dividing the original data into N sub-aperture data;
carrying out focusing imaging processing on each sub-aperture data to obtain a first sub-image;
registering the first sub-image in a ground virtual polar coordinate system to obtain registered image data;
error correction and subimage fusion are carried out on the registered image data by adopting a rapid coordinate descent method to obtain second subimage data;
judging whether the second sub-image data is the only sub-image, if so, outputting the sub-image as a focusing imaging result; if not, continuing to perform the step of registering the first sub-image.
2. The method of claim 1, wherein the dividing the raw data into N sub-aperture data comprises:
dividing the whole synthetic aperture into N shorter sub-apertures by taking 2 as a base number according to a fast decomposition back projection algorithm, wherein each sub-aperture comprises sub-aperture data; wherein the sub-aperture has a length of 1.
3. The method of claim 2, wherein after separating the raw data into N sub-aperture data, the method further comprises:
performing distance compression processing on each sub-aperture data;
and projecting each sub-aperture data after the distance compression processing to a sub-aperture virtual polar coordinate with the center of the sub-aperture as the origin.
4. The method according to claim 3, wherein the focused imaging process employs a modified PGA algorithm; the step of performing a focusing imaging process on each sub-aperture data to obtain a first sub-image comprises:
projecting the sub-aperture data in each sub-aperture virtual polar coordinate system into the ground virtual polar coordinate system;
and performing focusing imaging on each sub-aperture data in the ground virtual polar coordinate system by the PGA algorithm.
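As a rough illustration of the PGA family (the patent's modified PGA is defined by claim 5 and the description, not reproduced here), a minimal circular-shift plus phase-gradient estimate can be sketched as follows; windowing and iteration, which a practical PGA needs, are omitted.

```python
import numpy as np

def pga_phase_estimate(data):
    """Minimal phase-gradient autofocus step (illustrative sketch only).

    data : complex array, azimuth samples x range bins.
    Returns the estimated azimuth phase-error history (length n_az).
    """
    n_az, n_rg = data.shape
    g = np.empty_like(data)
    # Circularly shift the strongest scatterer of each range bin to centre.
    for r in range(n_rg):
        col = data[:, r]
        g[:, r] = np.roll(col, n_az // 2 - int(np.argmax(np.abs(col))))
    # Phase-gradient estimate, summed coherently across range bins.
    dphi = np.angle(np.sum(np.conj(g[:-1, :]) * g[1:, :], axis=1))
    # Integrate the gradient to recover the phase-error history.
    return np.concatenate(([0.0], np.cumsum(dphi)))
```

For a synthetic linear phase ramp across azimuth, the estimated per-sample phase gradient matches the applied slope away from the circular wrap point.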
5. The method according to claim 4, wherein the step of determining the modified PGA algorithm comprises:
constructing the radar-to-target slant-range equation in the sub-aperture virtual polar coordinate system:
R(t) = √(r_p² + (vt)² − 2·r_p·vt·Ω_p);
wherein R represents the slant range between the radar and the target P, r_p is the polar radius of the target P in the sub-aperture virtual polar coordinate system, Ω_p represents the projection vector of the target P in the sub-aperture virtual polar coordinate system, √(·) represents the square-root operation, (·)² represents the squaring operation, v represents the motion velocity of the radar platform, and t represents the azimuth time;
determining the sub-aperture focusing imaging formula in the ground virtual polar coordinate system:
I(r′, Ω′) = [formula published as image FDA0002400078500000013];
k′_Ω = 4π(Ω′ − Ω′_p)/λ;
wherein I represents the focused sub-image, r′ is the polar radius in the ground virtual polar coordinate system, k′_Ω is the wavenumber, Sinc(·) is the sinc-function operation, exp(·) is the exponential operation, j is the imaginary unit, r_p is the polar radius of the target P in the sub-aperture virtual polar coordinate system, cos(·) is the cosine operation, θ is the polar angle in the sub-aperture virtual polar coordinate system, λ is the radar operating wavelength, tan(·) is the tangent operation, θ_p is the polar angle of the target P in the sub-aperture virtual polar coordinate system, Ω′ is the projection vector in the ground virtual polar coordinate system, and Ω′_p is the projection vector of the target point P in the ground virtual polar coordinate system.
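Reading claim 5's slant-range relation as R(t) = √(r_p² + (vt)² − 2·r_p·vt·Ω_p) (the granted text publishes the formula only as an image, so this exact form is a reconstruction from the listed symbols), it reduces to the law of cosines between the platform displacement vt and the polar radius r_p, which can be sanity-checked numerically:

```python
import math

def slant_range(r_p, omega_p, v, t):
    """Reconstructed slant-range relation between radar and target P.

    Assumption: omega_p plays the role of cos(theta_p) in the law of
    cosines; the patent's exact formula is published only as an image.
    """
    return math.sqrt(r_p ** 2 + (v * t) ** 2 - 2.0 * r_p * v * t * omega_p)
```

At t = 0 the slant range equals the polar radius r_p, and for Ω_p = 1 (target on the flight line) it collapses to |r_p − vt|, both as expected.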
6. The method of claim 5, wherein registering the first sub-image in the ground virtual polar coordinate system to obtain the registered image data comprises:
coarsely registering the N first sub-images with a reference image, respectively, to obtain N coarsely registered sub-images;
and finely registering the coarsely registered sub-images to obtain the registered image data.
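Claim 6 leaves the coarse and fine registration operators open; a standard coarse step is integer-pixel cross-correlation computed via the FFT. The sketch below is only one possible realization and is not taken from the patent.

```python
import numpy as np

def coarse_register_shift(ref, img):
    """Return the integer shift d such that np.roll(img, d, axis=(0, 1))
    best aligns img with ref (circular cross-correlation via the FFT)."""
    cross = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img)))
    peak = np.unravel_index(np.argmax(np.abs(cross)), cross.shape)
    # Map circular peak positions to signed shifts.
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, cross.shape))
```

Rolling an image by a known offset and recovering that offset is a quick correctness check for this kind of registration routine.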
7. The method of claim 6, wherein the step of performing error correction and sub-image fusion on the registered image data using fast coordinate descent to obtain second sub-image data comprises:
acquiring the registered i-th level image data;
constructing a subscript value sequence of the sub-images: M = [1, 3, 5, …, N−1], the sequence containing N/2 elements in total;
performing phase-error correction on the M_j-th sub-image and the (M_j + 1)-th sub-image by the coordinate descent algorithm to generate the j-th sub-image of the (i + 1)-th level, and taking the ((M_j + 1)/2)-th data of the (i + 1)-th level sub-images, generated by an image fusion technique, as the second sub-image data.
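The index bookkeeping of claim 7 (odd subscripts M = [1, 3, …, N−1], each paired with its successor) can be traced with a small sketch; coherent averaging here stands in for the patent's coordinate-descent correction and fusion operators.

```python
def pairwise_fusion_indices(n):
    """Subscript sequence M = [1, 3, 5, ..., n-1] (1-based, n/2 elements)."""
    return list(range(1, n, 2))

def fuse_level(sub_images):
    """One fusion level: the M_j-th and (M_j + 1)-th sub-images (1-based)
    produce the j-th next-level sub-image; averaging is a placeholder."""
    return [0.5 * (sub_images[m - 1] + sub_images[m])
            for m in pairwise_fusion_indices(len(sub_images))]
```

For N = 8 the sequence is [1, 3, 5, 7], pairing sub-images (1,2), (3,4), (5,6), (7,8) into four next-level images.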
8. The method of claim 7, wherein the step of determining whether the second sub-image data is the only one sub-image comprises:
judging whether j is greater than N/2; if so, updating the number of sub-images as N = N/2;
if not, setting j = j + 1 and re-performing the step of constructing the subscript value sequence of the sub-images;
judging whether N is equal to 1; if so, determining the second sub-image data to be the only sub-image; if not, setting i = i + 1 and re-performing the step of registering the first sub-image in the ground virtual polar coordinate system to obtain the registered image data.
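Claim 8's termination logic amounts to halving the sub-image count after each completed level until N = 1; the update formula itself is published as an image and is here assumed to be N ← N/2, which is what pairwise fusion implies.

```python
def fusion_schedule(n):
    """Sub-image counts after each fusion level, assuming the update N <- N/2.

    n should be a power of two, matching the base-2 aperture decomposition.
    """
    counts = []
    while n > 1:
        n //= 2
        counts.append(n)
    return counts
```

Starting from 8 sub-images the schedule is [4, 2, 1]: three fusion levels before the single final image triggers termination.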
9. The method of claim 8, wherein the fast coordinate descent method comprises:
introducing an optimal phase correction factor into the discrete-form image to obtain:
I = [formula published as image RE-FDA0002571450460000022];
wherein I denotes the discrete form of the focused imaging result, Σ(·) denotes the summation operation, Ĩ_n denotes the n-th sub-image with phase error, e denotes the base of the natural exponential, j denotes the imaginary unit, φ̂_n denotes the estimated error phase of the n-th sub-image, Ĩ_{n0} denotes the n0-th sub-image with phase error, and φ_{n0} denotes the error phase of the n0-th sub-image;
calculating the sharpness of the i-th sub-image:
υ_i = [formula published as image RE-FDA0002571450460000027];
wherein υ_i denotes the sharpness of the i-th sub-image, Î_i denotes the sub-image after the i-th phase-error correction, Re(·) denotes the real-part operation, and (·)* denotes the conjugation operation;
constructing the applicability condition of the fast coordinate descent algorithm:
[condition published as image RE-FDA00025714504600000210];
wherein I_m denotes the m-th focused sub-image.
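Claim 9's sharpness and phase-optimisation formulas are published only as images; as a hedged illustration of the general idea, the sketch below uses the common sum-of-squared-intensity sharpness metric and a brute-force one-coordinate phase search. A real coordinate-descent implementation would use the closed-form or iterative update from the patent's description.

```python
import numpy as np

def image_sharpness(img):
    """Sum over pixels of |I|**4, a standard sharpness metric; the patent's
    exact expression (image RE-FDA0002571450460000027) may differ."""
    intensity = np.abs(img) ** 2
    return float(np.sum(intensity ** 2))

def best_phase(img_fixed, img_err, n_grid=181):
    """One coordinate-descent-style step: holding img_fixed, scan a phase
    factor applied to img_err and keep the sharpest combined image."""
    phases = np.linspace(-np.pi, np.pi, n_grid)
    scores = [image_sharpness(img_fixed + img_err * np.exp(1j * p))
              for p in phases]
    return float(phases[int(np.argmax(scores))])
```

If one sub-image carries a known phase offset relative to another, the search recovers the compensating phase to within the grid spacing.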
CN202010143982.5A 2020-03-04 2020-03-04 Robust and efficient decomposition projection automatic focusing method Active CN111537999B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010143982.5A CN111537999B (en) 2020-03-04 2020-03-04 Robust and efficient decomposition projection automatic focusing method

Publications (2)

Publication Number Publication Date
CN111537999A true CN111537999A (en) 2020-08-14
CN111537999B CN111537999B (en) 2023-06-30

Family

ID=71974789

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010143982.5A Active CN111537999B (en) 2020-03-04 2020-03-04 Robust and efficient decomposition projection automatic focusing method

Country Status (1)

Country Link
CN (1) CN111537999B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113514827A (en) * 2021-03-03 2021-10-19 南昌大学 Synthetic aperture radar imaging processing method and application in unmanned aerial vehicle cluster mode
CN114578355A (en) * 2022-03-03 2022-06-03 西安电子科技大学 Rapid time domain imaging method for hypersonic aircraft synthetic aperture radar
CN115453530A (en) * 2022-08-11 2022-12-09 南京航空航天大学 Bistatic SAR (synthetic aperture radar) filtering back-projection two-dimensional self-focusing method based on parameterized model

Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130300599A1 (en) * 2012-05-11 2013-11-14 Raytheon Company On-Board INS Quadratic Correction Method Using Maximum Likelihood Motion Estimation Of Ground Scatterers From Radar Data
CN104316924A (en) * 2014-10-15 2015-01-28 南京邮电大学 Autofocus motion compensation method of airborne ultra-high resolution SAR (Synthetic Aperture Radar) back projection image
CN104391297A (en) * 2014-11-17 2015-03-04 南京航空航天大学 Sub-aperture partition PFA (Polar Format Algorithm) radar imaging method
US20150062446A1 (en) * 2012-05-08 2015-03-05 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Projection display with multi-channel optics with non-circular overall aperture
CN104730520A (en) * 2015-03-27 2015-06-24 电子科技大学 Circumference SAR back projection self-focusing method based on subaperture synthesis
CN104793196A (en) * 2015-04-28 2015-07-22 西安电子科技大学 Real-time SAR (synthetic aperture radar) imaging method based on improved range migration algorithm
CN104833973A (en) * 2015-05-08 2015-08-12 电子科技大学 Linear array SAR backward projection self-focusing imaging method based on positive semi-definite programming
CN105842694A (en) * 2016-03-23 2016-08-10 中国电子科技集团公司第三十八研究所 FFBP SAR imaging-based autofocus method
CN106802416A (en) * 2017-02-21 2017-06-06 电子科技大学 A kind of quick factorization rear orientation projection SAR self-focusing methods
CN107748362A (en) * 2017-10-10 2018-03-02 电子科技大学 A kind of quick autohemagglutination focusing imaging methods of linear array SAR based on maximum sharpness
CN108562898A (en) * 2018-04-17 2018-09-21 西安电子科技大学 A kind of front side regards the distance and bearing bidimensional space-variant self-focusing method of SAR
CN109031295A (en) * 2018-07-17 2018-12-18 中国人民解放军国防科技大学 ISAR image registration method based on wave path difference compensation
CN109031301A (en) * 2018-09-26 2018-12-18 云南电网有限责任公司电力科学研究院 Alpine terrain deformation extracting method based on PSInSAR technology
CN109085589A (en) * 2018-10-16 2018-12-25 中国人民解放军国防科技大学 Sparse aperture ISAR imaging phase self-focusing method based on image quality guidance
CN109270529A (en) * 2018-12-07 2019-01-25 电子科技大学 Forward sight array SAR high-resolution imaging method and system based on virtual-antenna
US20190089892A1 (en) * 2017-09-21 2019-03-21 Canon Kabushiki Kaisha Image pickup apparatus having function of correcting defocusing and method for controlling the same
CN109521425A (en) * 2019-01-30 2019-03-26 云南电网有限责任公司电力科学研究院 A kind of SAR difference chromatography method and device
CN109799502A (en) * 2019-01-28 2019-05-24 南京航空航天大学 A kind of bidimensional self-focusing method suitable for filter back-projection algorithm
CN110095775A (en) * 2019-04-29 2019-08-06 西安电子科技大学 The platform SAR fast time-domain imaging method that jolts based on mixed proportion
CN110095787A (en) * 2019-05-25 2019-08-06 西安电子科技大学 SAL full aperture imaging method based on MEA and deramp
CN110146857A (en) * 2019-05-17 2019-08-20 西安电子科技大学 One kind is jolted platform SAR three-dimensional motion error estimation
CN110554385A (en) * 2019-07-02 2019-12-10 中国航空工业集团公司雷华电子技术研究所 Self-focusing imaging method and device for maneuvering trajectory synthetic aperture radar and radar system
CN110806577A (en) * 2019-11-06 2020-02-18 中国科学院电子学研究所 Focusing imaging method and device of synthetic aperture radar, equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
OCTAVIO PONCE: "Fully Polarimetric High-Resolution 3-D Imaging With Circular SAR at L-Band" *
SUN Xiaoxiao: "Research on Airborne SAR Back-Projection Imaging and Motion Compensation" *
ZENG Zhaoyang: "Motion Compensation Method for W-Band UAV MiSAR Real-Time Imaging" *
ZHU Xiaoxiu: "ISAR Sparse-Aperture Autofocus Imaging under Time-Varying Bistatic Angle" *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant