CN105842694A - FFBP SAR imaging-based autofocus method - Google Patents


Info

Publication number
CN105842694A
CN105842694A (application CN201610177551.4A)
Authority
CN
China
Prior art keywords
sub
aperture
image
level
pulse
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610177551.4A
Other languages
Chinese (zh)
Other versions
CN105842694B (en)
Inventor
钟雪莲
邓海涛
陈仁元
杨然
谈璐璐
马志娟
雍延梅
郝慧军
王金峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 38 Research Institute
Original Assignee
CETC 38 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 38 Research Institute filed Critical CETC 38 Research Institute
Priority to CN201610177551.4A priority Critical patent/CN105842694B/en
Publication of CN105842694A publication Critical patent/CN105842694A/en
Application granted granted Critical
Publication of CN105842694B publication Critical patent/CN105842694B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G01S13/89 Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S13/90 Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • G01S13/9004 SAR image acquisition techniques
    • G01S13/9017 SAR image acquisition techniques with time domain processing of the SAR signals in azimuth
    • G01S13/9019 Auto-focussing of the SAR signals
    • G01S13/904 SAR modes


Abstract

The invention provides an FFBP SAR imaging-based autofocus method through which accurately focused SAR images can be obtained in eight steps. Built on the FFBP framework, the method extracts the phase-gradient information of the motion error from the phase differences between adjacent sub-apertures of point targets, then estimates the motion error by integration and compensates it, achieving autofocus processing of high-accuracy SAR images. Unlike conventional methods, the method is not limited by the error order: motion errors of any order can be estimated, which substantially improves the robustness of the autofocus algorithm. Because the error information is extracted from adjacent sub-aperture phase differences, the repeated computation of nested overlapping sub-aperture methods is avoided. No large-scale optimization search is required, so the computational burden is greatly reduced and the imaging time is shortened.

Description

Self-focusing method based on FFBP SAR imaging
Technical Field
The invention belongs to the field of SAR signal processing, relates to the imaging processing of synthetic aperture radar (SAR), in particular to the autofocus processing of SAR images, and more particularly to an FFBP SAR imaging-based autofocus method.
Background
SAR imaging algorithms can be broadly divided into two categories: time-domain imaging algorithms and frequency-domain imaging algorithms. A frequency-domain imaging algorithm performs uniform compression in the range-Doppler domain or the two-dimensional frequency domain by exploiting the fact that targets in a scene have identical or similar Doppler frequency characteristics. Because all targets are compressed at once, such methods are very computationally efficient. Common frequency-domain imaging methods are the range-Doppler (RD) algorithm, the chirp scaling (CS) algorithm, the wavenumber-domain (ω-k) algorithm, and the spectral analysis (SPECAN) algorithm commonly used in ScanSAR. These frequency-domain methods were developed primarily for stripmap SAR. For other operating modes, dedicated imaging algorithms have been proposed, such as the polar format algorithm (PFA), the earliest algorithm used for spotlight SAR imaging. All frequency-domain algorithms (except the SPECAN method) require the echo signal to be transformed into the azimuth frequency domain, and therefore presuppose that the signal is not aliased in that domain. However, for spotlight and sliding-spotlight SAR, the pulse repetition frequency (PRF) of the radar system cannot exceed the full azimuth Doppler bandwidth, so azimuth spectral aliasing is inevitable. Imaging must either resolve the azimuth spectral aliasing or adopt azimuth sub-aperture imaging to avoid it.
Such processing is entirely feasible in the broadside case, but under large squint, especially for spaceborne large-squint and ultra-high-resolution imaging, the strong coupling between range and azimuth gives the whole imaging area a severely space-variant phase, and traditional frequency-domain imaging methods have difficulty handling this azimuth space variance. Time-domain imaging methods, such as the back-projection (BP) algorithm, handle the range-azimuth coupling of large-squint imaging well, are suitable for image reconstruction under complex imaging geometries such as large coherent accumulation angles or nonlinear trajectories, and the focused image has no geometric distortion. However, since the BP algorithm computes point by point, the computational load is huge when a large scene is imaged at high resolution, and its efficiency is far lower than that of frequency-domain algorithms. To reduce the computational load of the BP algorithm, researchers have proposed many fast back-projection algorithms, among which the FFBP (Fast Factorized Back-Projection) algorithm is one of the most commonly used.
The FFBP algorithm is a fast algorithm that removes the computational redundancy of the BP algorithm. Its fundamental principle is that the angular-domain resolution is proportional to the sub-aperture length: for the sub-apertures formed at the initial stage, the angular-domain bandwidth is very small, so by the Nyquist sampling law the corresponding angular-domain sampling rate can be very low. A coarse angular-resolution image can thus be obtained from a small number of angular samples, i.e., the data at each aperture position need not be back-projected onto every pixel of the imaging grid. On this basis, the FFBP algorithm establishes a butterfly-like computational structure whose complexity is O(N² log N), close to that of frequency-domain algorithms. At the same time, FFBP retains the advantages of conventional BP imaging, namely segmented imaging and ease of motion compensation. The FFBP algorithm can therefore replace the traditional BP algorithm in large-data-volume imaging applications.
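As a rough sanity check on the quoted complexity, one can compare the operation count of direct BP (N pulses, each back-projected onto an N x N grid) with the N² log N behaviour of FFBP. The model below is purely illustrative, not a profiler of either algorithm.

```python
import math

def bp_cost(n):
    # Direct BP: n pulses, each back-projected onto an n x n pixel grid.
    return n * n * n

def ffbp_cost(n, factor=2):
    # FFBP model: log_factor(n) merge stages, each touching ~n^2 samples.
    stages = math.log(n, factor)
    return n * n * stages

n = 4096
print(bp_cost(n) / ffbp_cost(n))  # speed-up of roughly n / log2(n)
```

For 4096 pulses the model predicts a speed-up of a few hundred, which is why the text calls FFBP "close to the frequency domain algorithm" in efficiency.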
For a time-domain imaging method, the ground scene can be accurately reconstructed as long as the platform position is accurate. However, in actual SAR data acquisition, the accuracy of the inertial navigation system reaches at best 2-10 cm. Ultra-high-resolution SAR imaging at around 0.1 m requires the position of the radar phase centre to be known at the millimetre level, which current inertial navigation systems cannot provide. The residual motion error caused by this limited accuracy produces severe azimuth defocusing and can even prevent imaging altogether. For a long time, researchers in the SAR field concentrated on frequency-domain imaging algorithms and their autofocus algorithms and paid little attention to time-domain autofocus, so results in this area are scarce. Time-domain imaging algorithms focus accurately, allow easy motion compensation and introduce no geometric distortion, which gives them unique advantages in emerging SAR applications such as circular SAR, ultra-wideband (UWB) SAR and bistatic SAR. Moreover, with the rapid development of parallel computing, the computational burden of time-domain imaging algorithms and their fast variants is no longer the obstacle it once was, and SAR researchers increasingly adopt time-domain imaging. In recent years more and more articles on this field have been published, but most remain at the level of straightforward application of the algorithm; research on autofocus processing for time-domain algorithms, especially the FFBP algorithm, is still very limited.
Most BP-related autofocus methods in the literature adopt the idea of optimization-based focusing: a cost function evaluating the focus quality of the image is established and the motion-error parameters are estimated by optimized solving. However, this involves the optimization of high-dimensional parameters, the computational load is very large, and the applicability is greatly limited. Jakowatz et al. pointed out that conventional frequency-domain autofocus methods, such as the phase gradient autofocus (PGA) algorithm, cannot be used directly on BP images formed in a ground coordinate system, because no Fourier-transform-pair relationship can be established between the image domain and the range-compressed phase-history domain. Under a small angular aperture, however, the azimuth angle and the slow-time domain are approximately a Fourier transform pair; they therefore proposed applying PGA to the BP imaging result in a range-azimuth coordinate system and then converting the image to rectangular coordinates, and verified the effectiveness of the method with semi-simulated data (see "C. V. Jakowatz, Jr. and D. E. Wahl, Considerations for autofocus of spotlight-mode SAR imagery created using a beamforming algorithm, Proc. of SPIE, Vol. 7337, pp. 73370A-1 to 73370A-9"). Lei Zhang et al. proposed combining a multiple-aperture map-drift (MAMD) method with FFBP imaging to estimate the Doppler modulation rate of each level of image (see "Lei Zhang et al., Integrating autofocus techniques with fast factorized back-projection for high-resolution spotlight SAR imaging, IEEE Geoscience and Remote Sensing Letters, 2013, 10(6): 1394-1398"), but this method must process all pulses of the imaging data at once and keep the information of all sub-images of the two adjacent imaging levels, so when the number of imaging pulses is large the required memory is very large.
Of course, all intermediate imaging results could be written to disk and read back when necessary, but this increases the data input/output time. Moreover, the method controls the estimation accuracy through the polynomial order: a higher order gives better accuracy, yet raising the order also degrades the estimation performance. Li et al. proposed combining the FFBP algorithm with a multi-aperture phase gradient autofocus method for imaging processing (see "Application of phase gradient autofocus in FFBP spotlight SAR processing, Journal of Xidian University (Natural Science Edition), 2014, 41(3): 26-32"), but this method not only has to process all pulses of the imaging data at once, it also requires adjacent sub-aperture images to overlap by half so that the continuity in azimuth of the PGA-estimated phase error can be guaranteed. This doubles the redundant computation and adds significantly to the complexity of the algorithm.
Disclosure of Invention
Aiming at the defects of existing FFBP-based autofocus algorithms, the invention provides a method that performs autofocus processing using the phase-difference information of adjacent sub-apertures of point targets. The method uses the phase-difference information to establish the relationship with the first-order derivative of the error between adjacent sub-apertures, estimates the residual motion error by a single integration, and compensates it, thereby avoiding sub-aperture overlap and the subsequent complex computation. The algorithm divides the azimuth pulses into pulse blocks that are imaged in sequence: the first pulse block is imaged in polar format, then converted to the rectangular coordinate system and fused into the image; the estimation and compensation of the motion error are embedded in every level of the FFBP imaging, so that when the SAR imaging is finished the autofocus processing is also complete. After the first pulse block is imaged, the second pulse block is processed, and so on. The advantage is that the intermediate results of all sub-apertures need not be stored, only the intermediate results of the sub-images within the current pulse block, which saves a large amount of memory. Compared with existing autofocus processing for BP imaging, the method requires no large-scale optimization search, fully inherits the FFBP imaging framework, and increases the computational load only very slightly. In addition, because the azimuth resolution of FFBP imaging grows from low to high as the image recursion proceeds, the FFBP-based autofocus method is well suited to handling image defocusing that spans range cells.
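The phase-gradient-by-difference idea above can be sketched numerically: treat the phase of a strong point target seen from successive sub-apertures as samples of the motion-error phase, take adjacent phase differences via conjugate products (so wrapping is handled implicitly), and integrate once to recover the error up to an irrelevant linear term. This is a minimal NumPy sketch on simulated data, not the patent's implementation.

```python
import numpy as np

# Hypothetical per-sub-aperture phase of a point target: a smooth,
# low-order motion-induced phase error sampled at sub-aperture centres.
t = np.linspace(0.0, 1.0, 64)
true_phase = 2.5 * np.sin(2 * np.pi * 1.5 * t)

# Complex point-target samples as seen from each sub-aperture.
sig = np.exp(1j * true_phase)

# Phase difference of adjacent sub-apertures -> phase-gradient samples.
grad = np.angle(sig[1:] * np.conj(sig[:-1]))

# One integration (cumulative sum) recovers the phase error up to a
# constant; remove the global linear term, which only shifts the image.
est = np.concatenate(([0.0], np.cumsum(grad)))
est -= np.polyval(np.polyfit(t, est, 1), t)
ref = true_phase - np.polyval(np.polyfit(t, true_phase, 1), t)
print(np.max(np.abs(est - ref)))  # tiny residual
```

Because the error is estimated sample by sample rather than fitted to a polynomial, this scheme is insensitive to the error order, which is the robustness claim the text makes.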
The invention specifically comprises the following steps:
a self-focusing method based on FFBP SAR imaging is characterized in that: the system comprises eight data processing modules:
and the module is used for initializing the SAR system parameters and the observation scene parameters.
And a module for setting FFBP imaging parameters, interpolation kernels and self-focusing parameters.
And a module for dividing original echo data, reading pulse block data and performing range-direction pulse compression.
And a module for FFBP level 0 imaging of pulse block data.
And a module for FFBP i-th level imaging of the pulse block data.
And a module for judging whether or not additional autofocus processing needs to be executed.
And converting the polar coordinate image into a rectangular coordinate system, and accumulating the polar coordinate image into a final SAR image.
And judging whether all the pulse blocks are processed.
Further, an FFBP SAR imaging-based self-focusing method specifically comprises the following steps:
step 1 (corresponding to the module for initializing the SAR system parameters and the observation scene parameters): and initializing SAR system parameters and observation scene parameters.
An imaging rectangular coordinate system and its origin are determined from the position information and the beam-pointing information output by the inertial navigation system. The initialized SAR system parameters include: the radar platform position vector (x^(0), y^(0), z^(0)), the platform flying speed V, the radar operating wavelength λ, the bandwidth B_r of the transmitted chirp signal, the time width T_p of the transmitted pulse, the radar pulse repetition frequency PRF, the echo range sampling frequency F_s, the propagation speed C of electromagnetic waves in air, and the radar echo data matrix. The radar echo data matrix comprises K range (fast-time) sampling points and L azimuth (slow-time) sampling points.
The observed scene parameters include: the azimuth and ground-range sampling intervals dx, dy of the scene, the numbers of sampling points M, N in azimuth and ground range, and the plane coordinate position (x_c_scene, y_c_scene) of the scene centre. That is, the plane coordinates of the (m, n)-th ground cell in the scene can be represented as (x_c_scene + m·dx, y_c_scene + n·dy), where -M/2 ≤ m < M/2 and -N/2 ≤ n < N/2. The vertical coordinate of each ground pixel is obtained from an externally supplied Digital Elevation Model (DEM), or the average reference elevation h_ref of the imaging scene area is used.
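The scene grid described above can be built directly from these parameters. The sketch below uses made-up values for dx, dy, M, N and the scene-centre position (they are placeholders, not the patent's), and falls back to the average reference elevation when no DEM is supplied.

```python
import numpy as np

# Hypothetical scene parameters (names follow the text).
dx, dy = 0.5, 0.5                 # azimuth / ground-range sampling intervals (m)
M, N = 8, 6                       # numbers of azimuth / range samples
xc_scene, yc_scene = 100.0, 200.0 # plane position of the scene centre

m = np.arange(-M // 2, M // 2)    # -M/2 <= m < M/2
n = np.arange(-N // 2, N // 2)    # -N/2 <= n < N/2
mm, nn = np.meshgrid(m, n, indexing="ij")
X = xc_scene + mm * dx            # plane x of ground cell (m, n)
Y = yc_scene + nn * dy            # plane y of ground cell (m, n)

# Without an external DEM, every cell takes the average reference elevation.
h_ref = 35.0
Z = np.full((M, N), h_ref)
print(X[0, 0], Y[0, 0])           # corner cell of the grid
```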
Step 2 (corresponding to the module for setting FFBP imaging parameters, interpolation kernel and self-focusing parameters): and setting FFBP imaging parameters, an interpolation kernel and self-focusing parameters.
The FFBP imaging parameters are the number of pulse blocks BlkNum, the number of pulses per pulse block, the number of levels LevelNum, and the decomposition factor n^(i) of each level. The numbers of sub-apertures and azimuth beams processed at each imaging level are determined from the FFBP imaging parameters. The interpolation kernel specifies the methods used for range and azimuth interpolation in FFBP imaging: frequency-domain zero-padding interpolation is used in range and Knab interpolation in azimuth.
The autofocus parameters are the number of autofocus iterations and the 3 global arrays opened up for storage at each level of the autofocus processing: the azimuth-position global array globalPos, the phase-gradient global array globalPhaseGradient, and the integrated-phase global array globalPhase.
Step 3 (corresponding to the module for dividing original echo data, reading pulse block data and performing range pulse compression): The raw echo data are divided according to the number of pulse blocks BlkNum and the number of pulses per block set in step 2. The data of one pulse block are read, range pulse compression is performed, and step 4 is entered.
Step 4 (corresponding to "module for FFBP level 0 imaging of pulse block data"): the pulse block data is subjected to the 0 th level imaging of FFBP, followed by proceeding to step 5.
Step 5 (corresponding to "module for FFBP i-th level imaging of pulse block data"): the pulse block data is subjected to i-th stage imaging of FFBP.
When all the sub-aperture images in the pulse block are merged into one polar coordinate image, step 6 is entered.
Step 6 (corresponding to "a module that determines whether or not additional autofocus processing needs to be performed"): determining whether additional autofocus processing needs to be performed: if yes, the process proceeds to step 7 after performing an additional auto-focusing step. If not, go to step 7.
Step 7 (corresponding to the module converting the polar coordinate image to the rectangular coordinate system and accumulating the polar coordinate image to the final SAR image): and converting the polar coordinate image obtained in the step 6 into a rectangular coordinate system, and accumulating the rectangular coordinate image obtained by the pulse block into a final SAR image.
Step 8 (corresponding to the module for judging whether all the pulse blocks are processed or not): judging whether the pulse block data divided according to the method in the step 3 is processed or not:
and if the pulse block data is not processed, reading the unprocessed pulse block data, compressing the distance direction pulse, and returning to the step 4.
And if the pulse block data is processed, ending the operation.
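The eight steps above can be sketched as a block-wise processing loop. Every stage below is a trivial stand-in with hypothetical names, kept only to show the control flow: pulse blocks are imaged one after another and accumulated into the final image, so only the current block's intermediate data need to live in memory.

```python
import numpy as np

def range_compress(blk):          # step 3 stand-in
    return blk

def ffbp_level0(data):            # step 4 stand-in: coarse polar image
    return data

def ffbp_level(img, level):       # step 5 stand-in: merge + embedded autofocus
    return img

def polar_to_cartesian(img):      # step 7 stand-in
    return img

def process_blocks(echo, blk_num, level_num):
    blocks = np.array_split(echo, blk_num, axis=0)   # step 3: divide pulses
    final_image = np.zeros_like(echo[0], dtype=complex)
    for blk in blocks:                               # steps 4-7 per block
        img = ffbp_level0(range_compress(blk))
        for level in range(1, level_num + 1):
            img = ffbp_level(img, level)
        final_image += polar_to_cartesian(img).sum(axis=0)  # accumulate
    return final_image                               # step 8: all blocks done

echo = np.ones((16, 4), dtype=complex)               # toy 16-pulse echo
out = process_blocks(echo, blk_num=4, level_num=2)
print(out)
```

With the identity stand-ins, each of the 16 pulses contributes one unit to every range bin, so the loop structure (not the SAR maths) is what the sketch demonstrates.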
Advantageous technical effects
Based on the FFBP framework, the invention extracts the phase-gradient information of the motion error from the phase differences of adjacent sub-apertures of point targets, estimates the motion error by integration and compensates it, achieving autofocus processing of high-accuracy SAR images. Compared with existing methods, the method is not affected by the error order, can estimate motion errors of any order, and greatly improves the robustness of the autofocus algorithm. Meanwhile, extracting the error information from adjacent sub-aperture phase differences avoids the repeated computation of nested overlapping sub-aperture methods; no large-scale optimization search is needed, so the computational load is greatly reduced and the imaging time shortened. Moreover, the method fully inherits the FFBP imaging framework and completes autofocus without major changes to the original imaging structure. The pulse blocks are imaged in sequence without storing the intermediate images of all pulse blocks, which greatly reduces the memory requirement of the algorithm. In addition, unlike existing methods, the polar sub-images in FFBP imaging are generated in a ground polar coordinate system, so an external elevation can be introduced during autofocus; this improves the accuracy of the autofocus estimation and gives the method greater application potential for SAR autofocus processing at ultra-high resolution or over areas of severe terrain relief.
Drawings
FIG. 1 is a block flow diagram of the present invention.
FIG. 2 is a schematic diagram of a polar coordinate system of the ground for FFBP imaging in the present invention.
Fig. 3 lists the simulation parameters of an airborne spotlight SAR system.
Figure 4 is a sinusoidal error with amplitude of 0.05m and period of 10 s.
Fig. 5 is the imaging result under the sinusoidal error shown in fig. 4.
Fig. 6 is a phase gradient of the residual motion error estimated from the first cycle of autofocusing.
Fig. 7 shows the result of integrating the motion error estimated in fig. 6 (with the global linear term removed).
Figure 8 is the imaging results obtained with the method of the invention.
Fig. 9 is an image obtained by performing 32-fold interpolation on fig. 8.
FIG. 10 is a distance-to-point target response obtained using the method of the present invention.
FIG. 11 is an azimuthal point target response obtained using the method of the present invention.
Detailed Description
Referring to fig. 1, a self-focusing method based on FFBP SAR imaging includes the following steps:
step 1: and initializing SAR system parameters and observation scene parameters.
An imaging rectangular coordinate system and its origin are determined from the position information and the beam-pointing information output by the inertial navigation system. The initialized SAR system parameters include: the radar platform position vector (x^(0), y^(0), z^(0)), the platform flying speed V, the radar operating wavelength λ, the bandwidth B_r of the transmitted chirp signal, the time width T_p of the transmitted pulse, the radar pulse repetition frequency PRF, the echo range sampling frequency F_s, the propagation speed C of electromagnetic waves in air, and the radar echo data matrix. The radar echo data matrix comprises K range (fast-time) sampling points and L azimuth (slow-time) sampling points, where K and L are positive integers. In the radar platform position vector (x^(0), y^(0), z^(0)), x^(0) is the platform position along the transverse axis of the horizontal plane, y^(0) the position along the longitudinal axis of the horizontal plane, and z^(0) the vertical position of the platform above the ground.
The observed scene parameters include: the azimuth and ground-range sampling intervals dx, dy of the scene, the numbers of sampling points M, N in azimuth and ground range, and the plane coordinate position (x_c_scene, y_c_scene) of the scene centre. That is, the plane coordinates of the (m, n)-th ground cell in the scene can be represented as (x_c_scene + m·dx, y_c_scene + n·dy), where -M/2 ≤ m < M/2 and -N/2 ≤ n < N/2. The vertical coordinate of each ground pixel is obtained from an externally supplied DEM (digital elevation model), or the average reference elevation h_ref of the imaging scene area is used.
Step 2: and setting FFBP imaging parameters, an interpolation kernel and self-focusing parameters.
The FFBP imaging parameters are the number of pulse blocks BlkNum, the number of pulses per pulse block, the number of levels LevelNum, and the decomposition factor n^(i) of each level. The numbers of sub-apertures and azimuth beams processed at each imaging level are determined from the FFBP imaging parameters.
The interpolation kernel specifies the methods used for range and azimuth interpolation in FFBP imaging: frequency-domain zero-padding interpolation is used in range and Knab interpolation in azimuth.
The autofocus parameters are the number of autofocus iterations and the 3 global arrays opened up for storage at each level of the autofocus processing: the azimuth-position global array globalPos, the phase-gradient global array globalPhaseGradient, and the integrated-phase global array globalPhase.
And step 3: the number of pulse blocks BlkNum and the number of pulse blocks set in step 2The raw echo data is divided. And (4) reading data of one pulse block, performing range pulse compression, and entering the step 4.
Step 4: The pulse block data undergoes FFBP level-0 imaging; then proceed to step 5.
Step 5: The pulse block data undergoes FFBP i-th level imaging.
When all the sub-aperture images in the pulse block are merged into one polar coordinate image, step 6 is entered.
Step 6: determining whether additional autofocus processing needs to be performed: if so, an additional autofocus step is performed. If not, go to step 7.
Step 7: The polar coordinate image obtained in step 5 or step 6 is converted into the rectangular coordinate system, and the rectangular coordinate image obtained from the pulse block is accumulated into the final SAR image.
Step 8: Judge whether all the pulse block data divided according to the method of step 3 have been processed:
If unprocessed pulse block data remain, read them, perform range pulse compression, and return to step 4.
If all the pulse block data have been processed, the procedure ends.
Further, in step 1, when the resolution of the SAR image is at the metre level or coarser, or the imaging area is flat, the vertical coordinate of the ground pixels is taken as the average reference elevation h_ref of the imaging scene area, i.e., the elevation of all ground cells is regarded as the average reference elevation. When the resolution is high, especially when the SAR image resolution is better than 0.2 m, or when the resolution is not high but the terrain of the imaging area undulates severely, the vertical coordinates of the ground pixels are provided by the DEM.
Referring to fig. 1, further, in step 4, the specific method for performing FFBP 0 th level imaging on pulse block data is as follows:
step 4.1, establishing a ground polar coordinate system by using the cosine of the polar radius and the azimuth angle, and setting the parameters of the coordinate system:
obtaining the plane coordinates of four angular points of the scene according to the plane coordinates of the center of the imaging scene, the sampling point number M, N of the azimuth direction and the ground distance direction and the sampling interval dx and dy thereof, and then calculating the maximum ground distance of the scene on each pulse position by utilizing the projection of the carrier position output by the inertial navigation system on the groundAnd minimum ground distance
The number of sampling points of the range dimension of the polar coordinate system at the pulse position is then obtained as the ground-distance span divided by the range sampling interval, rounded to an integer, where int denotes rounding and bin_R is the range sampling interval of the original echo signal; the range sampling interval of the polar coordinate system is the same as that of the original echo signal.
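The sample-count rule just described can be written out explicitly. The formula below is reconstructed from the text's description (int rounding of the ground-distance span over bin_R); the "+ 1" endpoint convention and the numeric values are assumptions for illustration.

```python
import math

def range_sample_count(rg_max, rg_min, bin_r):
    """Number of ground-range samples of the level-0 polar grid at one
    pulse position: the ground-distance span over the original echo's
    range sampling interval, rounded down, plus one endpoint (assumed)."""
    return int(math.floor((rg_max - rg_min) / bin_r)) + 1

# Toy span: 300 m of ground range at a 0.75 m sampling interval.
print(range_sample_count(1500.0, 1200.0, 0.75))
```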
The numbers of range sampling points at all pulse positions are set to the same value, namely the maximum value within the pulse block.
The maximum cosine MaxCos^(0) and the minimum cosine MinCos^(0) of the angle θ between the scene and the platform flight direction are obtained at each pulse position, and the cosine value at each beam position is obtained according to the number of azimuth beams.
according to the three-dimensional coordinates of the four angular points of the scene and the position of the carrier, calculating the space distances between the carrier and the four angular points of the scene, thereby obtaining the maximum slope distance of the scene at each pulse positionAnd minimum skew distanceAnd when the user looks from the front side or slightly obliquely, the minimum oblique distance is the closest oblique distance from the scene to the whole flight path.
Step 4.2: take the extreme values of the maximum and minimum slant ranges over all pulse positions in the pulse block to obtain the maximum slant range and the minimum slant range of the scene over the pulse block, intercept the corresponding signal segments from the original echo signals accordingly, and perform range up-sampling.
Step 4.3: according to the calculation of step 4.1, the dimension of the scene at each pulse position under the polar coordinate system is the number of azimuth beams by the number of range sampling points. Then, using the cosine of the azimuth beam angle and the nearest ground distance, obtain the plane rectangular coordinate values at each polar sampling point (m, n), where the polar radius of the polar coordinates and the sine of the azimuth beam angle enter the expression.
The elevation value at each sampling point is obtained from an external DEM file through its plane coordinates. If no DEM file is available, a reference elevation h_ref can be used in place of the elevation values.
After the coordinates at the polar sampling points are obtained, the distance between each sampling point and the carrier coordinate (x^(0), y^(0), z^(0)) at each pulse position is calculated, and the signal at the sampling point is obtained from the echo data of the pulse according to this distance, thereby yielding a polar coordinate image at the pulse position.
Step 4.4: repeat steps 4.1-4.3 to complete the assignment of the polar coordinates of the scene at every pulse position, obtaining one polar coordinate image of the corresponding dimension per pulse.
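The level-0 procedure of steps 4.1-4.4 can be sketched as follows. This is a simplified illustration under stated assumptions, not the patented implementation: the geometry is idealized (polar origin at the antenna's ground projection, flight direction along x, flat scene at a reference elevation), a nearest-neighbour range lookup stands in for the up-sampling and interpolation of step 4.2, and all function and parameter names are hypothetical.

```python
import numpy as np

def ffbp_level0_pulse(rc_pulse, ant_pos, cos_theta, gr_min, n_r, bin_r,
                      wavelength, r_start, h_ref=0.0):
    """Back-project one range-compressed pulse onto its per-pulse polar grid.

    rc_pulse  : range-compressed echo of this pulse (1-D complex array)
    ant_pos   : (x, y, z) antenna position for this pulse
    cos_theta : cosine of the azimuth angle at each beam position
    gr_min    : minimum ground distance of the scene at this pulse position
    r_start   : slant range of the first sample of the intercepted echo segment
    """
    rho = gr_min + bin_r * np.arange(n_r)          # polar radius (ground range)
    sin_theta = np.sqrt(1.0 - cos_theta**2)
    # plane coordinates of every polar sampling point (m, n)
    x = ant_pos[0] + np.outer(cos_theta, rho)
    y = ant_pos[1] + np.outer(sin_theta, rho)
    # slant range from the antenna to every sample, scene at elevation h_ref
    r = np.sqrt((x - ant_pos[0])**2 + (y - ant_pos[1])**2 + (ant_pos[2] - h_ref)**2)
    # nearest-neighbour range-bin lookup (the method itself uses up-sampling)
    idx = np.clip(np.round((r - r_start) / bin_r).astype(int), 0, rc_pulse.size - 1)
    # restore the carrier phase removed by range compression
    return rc_pulse[idx] * np.exp(1j * 4.0 * np.pi * r / wavelength)
```

Applied to each of the 2048 pulses of a block, this yields one small polar image per pulse, matching the 0th-level output described above.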
Referring to fig. 1, further, in step 5, the specific method for performing i-th level imaging of FFBP on pulse block data is as follows:
Step 5.1: according to the decomposition factor n^(i) of level i, update the sub-aperture center position coordinates (x^(i), y^(i), z^(i)) of the ith-level imaging and the number of azimuth beams; the number of ith-level sub-apertures equals the number of (i-1)th-level sub-apertures divided by n^(i). Then calculate, for each sub-aperture, the maximum ground distance, the minimum ground distance, the number of range sampling points, and the cosine value of each beam angle of the scene.
Step 5.2: perform range up-sampling on the images obtained at level i-1, updating the image dimension accordingly.
Step 5.3: the kth sub-aperture of the ith level is obtained by fusing n^(i) sub-aperture images of the (i-1)th level, i.e. the (k·n^(i)+j)th sub-images (j = 0, 1, …, n^(i)-1) of the (i-1)th level. Each of these is first resampled, giving the image of the (k·n^(i)+j)th sub-aperture of the (i-1)th level resampled into the coordinate system of the ith-level kth sub-aperture.
Step 5.4: judge whether the ith level needs autofocus processing. If so, perform autofocus step 1 to autofocus step 6 (detailed below). Otherwise, accumulate the resampled results and assign the sum to the ith-level kth sub-aperture image.
Step 5.5: repeat steps 5.1-5.4 to complete the polar coordinate imaging of all sub-apertures of the ith level, obtaining the corresponding number of polar coordinate images of the updated dimension.
Further, in step 5.3, the resampling process is as follows: calculate the rectangular coordinates of each sampling point under the ith-level kth sub-aperture polar coordinate system, and from them compute the straight-line distance between each sampling point and the center of the (k·n^(i)+j)th sub-aperture of the (i-1)th level, its projected distance on the ground, and the azimuth cosine; at the same time, compute the straight-line distance between each sampling point and the center position of the ith-level kth sub-aperture. From the value of the projected ground distance, determine the range cell, and on this basis obtain the resampled signal at each sampling point of the ith-level kth sub-aperture using Knab interpolation. Finally, multiply the resampled signal by a phase-compensation factor, where λ is the radar operating wavelength.
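The resampling of step 5.3 can be sketched as below. This is a hedged illustration: the function and parameter names are hypothetical, a nearest-neighbour lookup stands in for the Knab interpolation, and the re-referencing factor exp(j·4π(d_new − d_old)/λ) is an assumption about the phase-compensation factor whose formula is not legible in the source text.

```python
import numpy as np

def resample_to_new_centre(old_img, old_gr0, d_gr, old_cos, new_gr, new_cos,
                           centre_old, centre_new, wavelength):
    """Resample an (i-1)-level polar sub-image into the grid of the i-level
    sub-aperture centred at centre_new (flat scene, simplified geometry)."""
    old_cos = np.asarray(old_cos); new_cos = np.asarray(new_cos)
    new_gr = np.asarray(new_gr)
    co = np.asarray(centre_old, float); cn = np.asarray(centre_new, float)
    out = np.empty((new_cos.size, new_gr.size), complex)
    for m, c in enumerate(new_cos):
        s = np.sqrt(1.0 - c * c)
        x = cn[0] + c * new_gr                      # plane coords of new samples
        y = cn[1] + s * new_gr
        gr_old = np.hypot(x - co[0], y - co[1])     # projected ground distance
        cos_old = (x - co[0]) / gr_old              # azimuth cosine, old system
        # nearest-neighbour lookup in the old grid (Knab in the real method)
        mi = np.clip(np.round((cos_old - old_cos[0]) /
                              (old_cos[1] - old_cos[0])).astype(int),
                     0, old_cos.size - 1)
        ni = np.clip(np.round((gr_old - old_gr0) / d_gr).astype(int),
                     0, old_img.shape[1] - 1)
        d_old = np.sqrt(gr_old**2 + co[2]**2)       # slant range to old centre
        d_new = np.sqrt((x - cn[0])**2 + (y - cn[1])**2 + cn[2]**2)
        # re-reference the residual phase from the old to the new centre
        out[m] = old_img[mi, ni] * np.exp(1j * 4.0 * np.pi *
                                          (d_new - d_old) / wavelength)
    return out
```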
Referring to fig. 1, further, in step 5.4, the autofocus algorithm is embodied as follows:
Self-focusing step 1: process, in sub-aperture order, all the sub-aperture images that need to be fused:
Self-focusing step 1.1: if the fused sub-aperture image is the globally first sub-aperture of the (i-1)th-level images, i.e. k = 0 and j = 0, calculate the global azimuth positions globalPos^(i) corresponding to the sub-aperture, set the corresponding global phase gradients globalPhaseGradient^(i) all to zero, and save the sub-aperture image in the variable LastSubImg^(i). Then return to self-focusing step 1.
Self-focusing step 1.2: if the fused sub-aperture image is the first sub-image for synthesizing the ith-level kth sub-aperture image but not the globally first sub-aperture, i.e. the (k·n^(i))th sub-aperture of the (i-1)th level with k ≠ 0 and j = 0, resample the sub-aperture to the aperture center of its preceding sub-aperture, i.e. to the aperture center of the (k·n^(i)-1)th sub-aperture of the (i-1)th level, take it as the current sub-aperture image ThisSubImg^(i), and then perform self-focusing step 2 to self-focusing step 4.
Self-focusing step 1.3: if the fused sub-aperture image is the (k·n^(i)+j)th sub-aperture image of the (i-1)th level, i.e. k ≠ 0 and j ≠ 0, resample the sub-aperture image to the aperture center of its preceding sub-aperture, take it as the current sub-aperture image ThisSubImg^(i), and then perform self-focusing step 2 to self-focusing step 4. Self-focusing step 2: using the currently processed sub-aperture image ThisSubImg^(i) and the last adjacent sub-aperture image saved in LastSubImg^(i), estimate the average constant change of the phase gradients of the two sub-apertures, then enter self-focusing step 3.
Self-focusing step 3: uniformly assign the phase gradients of all aperture positions corresponding to the currently processed (i-1)th-level sub-aperture to the estimated value, and record them in the global phase gradient array globalPhaseGradient^(i), the estimate being the phase gradient of one (i-1)th-level sub-aperture. Assign the global azimuth position array globalPos^(i) according to the currently processed sub-aperture center position and aperture length; the phases in globalPhaseGradient^(i) correspond one-to-one to the positions in globalPos^(i). Update LastSubImg^(i) with the current ThisSubImg^(i), update the corresponding parameters, and then enter self-focusing step 4.
Self-focusing step 4: repeat self-focusing step 1 to self-focusing step 3 until all sub-images to be fused for the ith-level kth sub-aperture have been processed, then accumulate all the resampled sub-images and assign the sum to the ith-level kth sub-aperture image.
Self-focusing step 5: after the fusion of the ith-level kth sub-aperture image is completed, integrate the phase gradient in globalPhaseGradient^(i) over the sub-aperture interval and store the result in globalPhase^(i) for compensation of the aperture; globalPhase^(i) likewise corresponds one-to-one to the positions in globalPos^(i). Self-focusing step 6: use globalPhase^(i) to perform error compensation on the ith-level kth sub-aperture, obtaining the compensated sub-aperture image.
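The integration of self-focusing step 5, turning the recorded phase gradient into a phase over one sub-aperture interval, can be sketched with a simple trapezoidal cumulative sum. The function name and the trapezoidal rule are assumptions for illustration; the patent does not specify the numerical integration scheme.

```python
import numpy as np

def integrate_phase_gradient(global_pos, global_grad, lo, hi):
    """Integrate the stored phase gradient over the sub-aperture interval
    [lo, hi], returning the aperture positions and the integrated phase
    (zero at the first position of the interval)."""
    sel = (global_pos >= lo) & (global_pos <= hi)
    pos, grad = global_pos[sel], global_grad[sel]
    # trapezoidal cumulative integration: phase[0] = 0
    phase = np.concatenate(([0.0],
                            np.cumsum(0.5 * (grad[1:] + grad[:-1]) * np.diff(pos))))
    return pos, phase
```

A constant gradient g over an aperture of length L thus integrates to a linear phase ramp reaching g·L, which is what step 6 then compensates.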
Referring to fig. 1, further, in the autofocus algorithm of step 5.4, the specific method for estimating the average constant change of the phase gradients of two sub-apertures from the currently processed sub-aperture image ThisSubImg^(i) and the last adjacent sub-aperture image saved in LastSubImg^(i) is as follows:
Autofocus step 2.1: find point targets in the ThisSubImg^(i) and LastSubImg^(i) images; each selected point target must exhibit point-target characteristics in both sub-aperture images, and the pixel coordinates of each point target are the same in the two adjacent sub-aperture images.
Autofocus step 2.2: in the ThisSubImg^(i) and LastSubImg^(i) images, cut out, centered on each point target, a complex image block of size N_a × N_r, where N_a and N_r are the numbers of points in the azimuth and range directions, each an integer power of 2, obtaining two complex sub-image blocks.
Self-focusing step 2.3: perform two-dimensional interpolation on the two sub-image blocks using the frequency-domain zero-padding method, obtaining the two interpolated image blocks.
Self-focusing step 2.4: search the interpolated image block for the pixel position (m_maxp, n_maxp) of the target peak, and measure the phase difference of the two interpolated blocks at that position, arg(a_this · conj(a_last)), where arg denotes the phase and a_this, a_last are the complex values at (m_maxp, n_maxp) in the two interpolated images, respectively.
Self-focusing step 2.5: repeat self-focusing steps 2.2-2.4 for all point targets to obtain the phase differences at all target positions, sum them to obtain the average phase difference, and from it the average constant change of the two sub-apertures' phase gradients, where L_sub is the number of azimuth positions of the (i-1)th-level sub-aperture.
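Autofocus steps 2.1-2.5 can be sketched as follows: each point target is windowed, interpolated by frequency-domain zero padding, and the phase difference of the two blocks at the peak is averaged. The function name, window sizes and up-sampling factor are hypothetical defaults chosen for illustration.

```python
import numpy as np

def pairwise_phase_diff(this_img, last_img, targets, na=8, nr=8, up=4):
    """Average phase difference between two adjacent sub-aperture images,
    measured at the peaks of selected point targets.

    targets: list of (azimuth, range) pixel coordinates, identical in both
             images, with the Na x Nr window fully inside each image."""
    diffs = []
    for (pa, pr) in targets:
        interp = []
        for img in (this_img, last_img):
            blk = img[pa - na // 2: pa + na // 2, pr - nr // 2: pr + nr // 2]
            # frequency-domain zero-padding interpolation (up-sample by 'up')
            spec = np.fft.fftshift(np.fft.fft2(blk))
            pad = np.zeros((na * up, nr * up), complex)
            a0, r0 = (na * up - na) // 2, (nr * up - nr) // 2
            pad[a0:a0 + na, r0:r0 + nr] = spec
            interp.append(np.fft.ifft2(np.fft.ifftshift(pad)))
        a_this, a_last = interp
        # locate the interpolated peak and read the phase difference there
        peak = np.unravel_index(np.argmax(np.abs(a_this)), a_this.shape)
        diffs.append(np.angle(a_this[peak] * np.conj(a_last[peak])))
    return float(np.mean(diffs))
```

Dividing this average by the number of azimuth positions of the sub-aperture then gives the average constant change of the phase gradient described in step 2.5.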
Referring to fig. 1, further, in the self-focusing algorithm of step 5.4, the specific steps for compensating the error of the ith-level kth sub-aperture and obtaining a compensated sub-aperture image are:
auto-focusing step 6.1: and performing two-dimensional Fourier transform on the ith-level kth sub-aperture.
Self-focusing step 6.2: according to the aperture position of the sub-aperture, intercept from globalPhase^(i) the phase Phase^(i) at the corresponding position for compensation, while also compensating the envelope offset; the compensation formula is:

FIc_k^(i) = FI_k^(i) · exp{ -j (1 + λ·f_r / (C·sin β)) · Phase^(i) },

where FI_k^(i) is the two-dimensional spectrum of the ith-level kth sub-aperture image, FIc_k^(i) is the compensated two-dimensional spectrum, β is the down-looking angle of the sub-aperture, and f_r is the range frequency.
Self-focusing step 6.3: perform a two-dimensional inverse Fourier transform on the compensated two-dimensional spectrum, completing the compensation of the ith-level kth sub-aperture.
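Steps 6.1-6.3 amount to a multiplicative correction in the two-dimensional spectrum. A minimal sketch follows; the function name is hypothetical, and the assignment of one phase value per azimuth-frequency bin is an assumption about how Phase^(i) is laid out, made so the example is self-contained.

```python
import numpy as np

def compensate_subaperture(img, phase, wavelength, fr, beta, c=3.0e8):
    """Compensate an estimated phase error (and its coupled envelope shift)
    in the 2-D spectrum of a sub-aperture image.

    img   : complex sub-aperture image, azimuth x range
    phase : intercepted phase error, one value per azimuth bin of the spectrum
    fr    : range-frequency axis (Hz), one value per range bin
    beta  : down-looking angle of the sub-aperture (rad)
    """
    FI = np.fft.fft2(img)                              # step 6.1: 2-D FFT
    # step 6.2: FIc = FI * exp{-j (1 + lambda*fr/(C*sin(beta))) * Phase}
    factor = 1.0 + wavelength * fr[None, :] / (c * np.sin(beta))
    FIc = FI * np.exp(-1j * factor * phase[:, None])
    return np.fft.ifft2(FIc)                           # step 6.3: inverse FFT
```

The range-frequency-dependent term shifts the signal envelope along range while the constant term removes the azimuth phase error, so errors spanning more than one range cell can still be compensated.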
Further, in step 6, an additional autofocus process is applied to the polar coordinate image finally obtained for each pulse block: two adjacent pulse blocks are fused into a higher-resolution image, and the error is then estimated and compensated. The reason is that a longer synthetic aperture helps to estimate lower-frequency errors.
The specific steps of the additional auto-focus 0 th cycle are as follows:
Step 6.1(0): store the polar coordinate image obtained from the first pulse block in a variable, calculate the global azimuth positions corresponding to this sub-aperture, and set the corresponding global phase gradients all to zero.
Step 6.2(0): resample the polar coordinate image obtained from the next pulse block to the aperture center of the previous pulse block, and store the result in a variable.
Step 6.3(0): using the currently processed sub-aperture image and the last adjacent sub-aperture image saved earlier, estimate the average phase gradient of the aperture and record it in the global phase gradient array. Meanwhile, assign the global azimuth position array according to the currently processed sub-aperture center position and aperture length. Integrate the phase gradient over the sub-aperture interval and store the result in the global phase array, and use it to perform error compensation on the pulse block image; then assign the current pulse block image to the saved variable and proceed to step 6.4(0).
Step 6.4(0): judge whether the 0th cycle is the last cycle; if so, convert the sub-aperture image obtained from the pulse block to the rectangular coordinate system and accumulate it into the final image. If not, repeat steps 6.2(0) to 6.4(0) until all pulse blocks of the 0th cycle have been processed.
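The block-by-block flow of the 0th additional autofocus cycle can be sketched as a driver loop over pulse-block images. This is an illustrative skeleton only: the estimate/integrate/compensate callables stand in for the procedures of steps 6.3(0), and the resampling of each block to the previous aperture center (step 6.2(0)) is assumed to have been done already; all names are hypothetical.

```python
import numpy as np

def additional_autofocus_cycle0(block_images, positions, estimate_gradient,
                                integrate, compensate):
    """Steps 6.1(0)-6.4(0): the first pulse-block image seeds LastSubImg;
    each following block is compared against it, its phase gradient recorded,
    integrated to a phase, and compensated before becoming the new LastSubImg."""
    last = block_images[0]
    global_grad = [np.zeros_like(positions[0])]      # first block: zero gradient
    out = [last]
    for img, pos in zip(block_images[1:], positions[1:]):
        grad = estimate_gradient(img, last)          # average gradient vs. LastSubImg
        global_grad.append(np.full_like(pos, grad))  # constant over this aperture
        phase = integrate(pos, global_grad[-1])      # phase gradient -> phase error
        out.append(compensate(img, phase))
        last = out[-1]                               # update LastSubImg
    return out
```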
The specific steps of the additional autofocus ith cycle (i ≠ 0) are as follows:
Step 6.1(i): judge whether the (i-1)th-level jth sub-aperture image is the first sub-aperture for fusing the ith-level kth sub-aperture image; if so, calculate the dimensions of the fused image, the cosines of the beam angles and the fused aperture center position, then perform step 6.2(i). If not, perform step 6.2(i) directly.
Step 6.2(i) resampling the i-1 st-level jth sub-aperture image to a sub-aperture coordinate system of the ith-level kth sub-aperture, and accumulating the resampled image to the ith-level kth sub-aperture image.
Step 6.3(i) determines whether the j-th sub-aperture at the i-1 th level is the last sub-aperture for fusing the k-th sub-aperture at the i-th level, if so, step 6.4(i) is executed, and if not, step 6.1(i) is executed by making j equal to j + 1.
Step 6.4(i): judge whether the ith-level kth sub-aperture image is the globally first sub-aperture image of the ith level. If so, store the image in a variable, calculate the global azimuth positions corresponding to the sub-aperture, set the corresponding global phase gradients all to zero, and then perform step 6.5(i). If not, save the currently processed sub-aperture image, estimate with the previously saved image the average constant change of the phase gradient, integrate the resulting gradient values to obtain the phase, compensate the error to obtain the compensated ith-level kth sub-aperture image, assign the current image to the saved variable, and then perform step 6.5(i).
Step 6.5(i): judge whether the ith cycle is the last cycle; if so, convert the ith-level kth sub-aperture image to the rectangular coordinate system and accumulate it into the final image; if not, store the image for subsequent processing. Then let k = k+1 and j = j+1, and repeat steps 6.1(i) to 6.4(i) until all sub-apertures of the (i-1)th level have been processed.
Examples
The method is mainly verified by simulation experiments; all steps and conclusions have been verified as correct on the Visual Studio 2010 platform. The method can be used for autofocus processing of stripmap, spotlight and sliding-spotlight SAR FFBP imaging; here, ultra-high-resolution (better than 0.1 m) spotlight SAR is taken as the simulation example. Fig. 1 is a flow chart of the method. The specific implementation steps are as follows:
Step 1: set the SAR system parameters and observation scene parameters. Fig. 2 is a schematic diagram of the ground polar coordinate system adopted by the FFBP algorithm, and fig. 3 lists the simulation parameters of the spotlight SAR system. Five point targets are placed near the scene center at an elevation of 0 m. Sinusoidal errors with amplitude 0.005 m and period 10 s are added in the y and z directions of the platform; the resulting error along the line of sight to the scene center is shown in fig. 4, and the result obtained without autofocus is shown in fig. 5.
Step 2: determine the FFBP imaging parameters and autofocus parameters and select the interpolation kernels. The FFBP algorithm processes 12 complete pulse blocks of 2048 pulses each, requiring 7 levels of processing; the decomposition factors of the earlier levels are all 4 and that of the last level is 2, after which the polar coordinate image is converted to the rectangular coordinate system. For accurate estimation of the residual motion error, 16-fold FFT interpolation is used in the range direction and 16-point Knab interpolation in the azimuth direction. The numbers of azimuth beams of the successive levels are 8, 16, 64, 256, 512, 2048 and 4096. The number of autofocus cycles is 1+2: autofocus is performed at the 6th level of FFBP imaging, and after pulse-block imaging is completed, autofocus is applied 2 more times to adjacent pulse blocks using the additional autofocus method.
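The sub-aperture bookkeeping implied by these parameters can be checked with a small helper (hypothetical, not part of the patent): starting from one sub-aperture per pulse, each level divides the sub-aperture count by its decomposition factor.

```python
# Hypothetical helper: sub-aperture counts per FFBP level for one pulse block.
def subapertures_per_level(pulses_per_block, factors):
    counts = [pulses_per_block]          # level 0: one sub-aperture per pulse
    for n in factors:
        counts.append(counts[-1] // n)   # each level merges n sub-apertures
    return counts

# 2048 pulses per block, five levels with factor 4 and a final level with factor 2
print(subapertures_per_level(2048, [4, 4, 4, 4, 4, 2]))
# -> [2048, 512, 128, 32, 8, 2, 1]
```

The resulting counts 2048, 512, 128, 32, 8, 2, 1 match the numbers of polar coordinate images reported for levels 0 through 6 in steps 4 and 5 below.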
Step 3: read the 0th to 2047th pulses and perform range pulse compression.
Step 4: FFBP 0th-level imaging; 2048 polar coordinate images of dimension 8 × 4316 are obtained.
Step 5: FFBP 1st-level imaging yields 512 polar coordinate images of dimension 16 × 4316; 2nd-level imaging yields 128 images of 64 × 4316; 3rd-level imaging yields 32 images of 256 × 4316; 4th-level imaging yields 8 images of 512 × 4316; 5th-level imaging yields 2 images of 2048 × 4316. At the 6th level, autofocus processing is performed: 5 point targets are selected for phase-difference estimation in each pair of adjacent sub-apertures, the intercepted target image blocks are 64 pixels (azimuth) × 8 pixels (range), and 1 polar coordinate image of dimension 4096 × 4314 is obtained. Fig. 6 and fig. 7 show the full-aperture phase gradient obtained by the 6th-level autofocus processing and the result after its integration.
Step 6: after imaging of the first pulse block is finished, 1 polar coordinate image of dimension 4096 × 4314 is obtained and saved in the LastSubImg^(i) variable, and the related variables are initialized. Otherwise, the constant change of the phase gradient between the two pulse blocks is estimated from the image in LastSubImg^(i) and the currently processed ThisSubImg^(i) image according to the autofocus processing method, giving the phase gradient over the two apertures; the final phase error is then obtained by a single integration and compensated on the fused image.
Step 7: convert the polar coordinate image obtained in step 6 to the rectangular coordinate system, and accumulate the rectangular coordinate image of the pulse block into the final SAR image.
Step 8: repeat step 3 to step 7 until all pulse blocks have been processed.
Fig. 8 shows the point-target compression result after autofocus is completed, fig. 9 the image after 32-fold up-sampling, and fig. 10 and fig. 11 the range and azimuth point-target responses obtained by the method of the present invention. The compression quality of the point targets is greatly improved after processing by the method. It can also be seen that the method does not require the error to be confined to a single range cell: a satisfactory focusing effect is still achieved when the error spans multiple range cells. In addition, the method does not process all the imaged pulses at once; the pulse blocks are imaged and autofocused one by one, which greatly reduces the memory requirement of the algorithm when the number of pulses is large.

Claims (10)

1. A self-focusing method based on FFBP SAR imaging, characterized in that it comprises eight data processing modules:
a module for initializing SAR system parameters and observation scene parameters;
a module for setting FFBP imaging parameters, interpolation kernels and self-focusing parameters;
a module for dividing original echo data, reading pulse block data, and performing range-direction pulse compression;
a module for FFBP 0 level imaging of pulse block data;
a module for FFBP i-th level imaging of pulse block data;
a module for judging whether additional autofocus processing is required;
a module for converting the polar coordinate image into a rectangular coordinate system and accumulating the polar coordinate image into a final SAR image;
a module for judging whether all the pulse blocks have been processed.
2. A self-focusing method based on FFBP SAR imaging is characterized in that: the eight modules are processed according to the following steps:
step 1: initializing SAR system parameters and observation scene parameters;
determining an imaging rectangular coordinate system and its origin according to the position information and beam pointing information output by the inertial navigation system; the initialized SAR system parameters include: the radar platform position vector (x^(0), y^(0), z^(0)), the platform flying speed V, the radar operating wavelength λ, the bandwidth B_r of the transmitted linear frequency-modulated signal, the time width T_p of the radar transmit pulse, the radar pulse repetition frequency PRF, the echo range sampling frequency F_s, the propagation speed C of electromagnetic waves in the air, and the radar echo data matrix; the radar echo data matrix comprises the number K of range fast-time sampling points and the number L of azimuth slow-time sampling points;
the observation scene parameters include: the azimuth and range sampling intervals dx, dy of the scene, the numbers of sampling points M, N in the azimuth and range directions, and the plane coordinate position (x_c_scene, y_c_scene) of the scene center; that is, the plane coordinates of the (m, n)th ground unit in the scene can be represented as (x_c_scene + m·dx, y_c_scene + n·dy), where -M/2 ≤ m < M/2 and -N/2 ≤ n < N/2; the vertical coordinate of each ground pixel is obtained from an externally input DEM (digital elevation model), or the average reference elevation h_ref of the imaging scene area is adopted;
Step 2: setting FFBP imaging parameters, interpolation kernels and self-focusing parameters;
the FFBP imaging parameters are the number of pulse blocks BlkNum, the number of pulses per pulse block, the number of levels LevelNum, and the decomposition factor n^(i) of each level; the number of sub-apertures and the number of azimuth beams processed at each imaging level are determined from the FFBP imaging parameters;
The interpolation kernel is a method adopted by distance direction and azimuth direction interpolation in FFBP imaging, wherein the distance direction interpolation adopts frequency domain zero padding interpolation, and the azimuth direction adopts Knab interpolation;
the autofocus parameters are the number of cycles of the autofocus processing and the 3 global storage arrays opened up for each level of the autofocus processing, namely: the global array globalPos of azimuth positions, the global array globalPhaseGradient of phase gradients, and the global array globalPhase of integrated phases;
step 3: dividing the original echo data according to the number of pulse blocks BlkNum and the number of pulses per pulse block set in step 2; reading one block of pulse data, performing range pulse compression, and entering step 4;
step 4: performing FFBP 0th-level imaging on the pulse block data, and then entering step 5;
step 5: performing ith-level FFBP imaging on the pulse block data;
when all the sub-aperture images in the pulse block are fused into a polar coordinate image, entering step 6;
step 6: determining whether additional autofocus processing needs to be performed: if yes, executing an additional self-focusing step and then entering a step 7; if not, entering step 7;
step 7: converting the polar coordinate image obtained in step 6 into the rectangular coordinate system, and accumulating the rectangular coordinate image of the pulse block into the final SAR image;
step 8: judging whether all the pulse block data divided by the method of step 3 have been processed:
if the pulse block data is not processed, reading the next unprocessed pulse block data, compressing the distance direction pulse, and returning to the step 4;
and if the pulse block data is processed, ending the operation.
3. The FFBP SAR imaging-based autofocus method of claim 2, wherein:
in step 1, when the resolution of the SAR image is meter level or lower, the coordinate of the ground pixel in the vertical direction adopts the average reference elevation h of the imaging scene arearefI.e. the elevation of all ground units is considered as the average reference elevation;
when the resolution of the SAR image is better than 0.2m, the coordinate of the ground pixel in the vertical direction is provided by the DEM.
4. The FFBP SAR imaging-based autofocus method of claim 2, wherein:
in step 4, the specific method for performing FFBP 0-level imaging on pulse block data is as follows:
step 4.1, establishing a ground polar coordinate system by using the cosine of the polar radius and the azimuth angle, and setting the parameters of the coordinate system:
obtaining the plane coordinates of the four corner points of the scene from the plane coordinates of the imaging scene center, the numbers of sampling points M, N in the azimuth and ground-range directions and their sampling intervals dx and dy, and then calculating the maximum ground distance and the minimum ground distance of the scene at each pulse position using the projection onto the ground of the carrier position output by the inertial navigation system;
obtaining, from the corresponding formula, the number of sampling points of the range dimension of the polar coordinate system at the pulse position, wherein int denotes rounding, bin_R is the range sampling interval of the original echo signal, and the range sampling interval of the polar coordinate system is the same as that of the original echo signal;
the number of range sampling points at all pulse positions is then set to the same value, namely the maximum value within the pulse block;
obtaining the maximum cosine value MaxCos^(0) and the minimum cosine value MinCos^(0) of the included angle θ between the scene and the carrier flight direction at each pulse position, and, according to the number of azimuth beams MBeam^(0), obtaining the cosine value of the included angle at each beam position:

CosTheta_m^(0) = MinCos^(0) + m · (MaxCos^(0) − MinCos^(0)) / (MBeam^(0) − 1),

wherein 0 ≤ m < MBeam^(0);
calculating, from the three-dimensional coordinates of the four corner points of the scene and the carrier position, the spatial distances between the carrier and the four corner points, thereby obtaining the maximum slant range and the minimum slant range of the scene at each pulse position; in the broadside or slightly squinted case, the minimum slant range is the closest slant range from the scene to the whole flight track;
step 4.2: taking the extreme values of the maximum and minimum slant ranges over all pulse positions in the pulse block to obtain the maximum slant range and the minimum slant range of the scene over the pulse block, and intercepting the corresponding signal segments from the original echo signals for range up-sampling;
step 4.3: according to the calculation of step 4.1, the dimension of the scene at each pulse position under the polar coordinate system is the number of azimuth beams by the number of range sampling points; then, using the cosine of the azimuth beam angle and the nearest ground distance, obtaining the plane rectangular coordinate values at the polar sampling point (m, n), wherein the polar radius of the polar coordinates and the sine of the azimuth beam angle enter the expression;
the elevation value at each sampling point is obtained from an external DEM file through its plane coordinates; if there is no DEM file, a reference elevation h_ref can replace the elevation values;
after the coordinates at the polar sampling points are obtained, calculating the distance between each sampling point and the carrier coordinate (x^(0), y^(0), z^(0)) at each pulse position, and obtaining the signal at the sampling point from the echo data of the pulse according to this distance, thereby obtaining a polar coordinate image at the pulse position;
step 4.4: repeating steps 4.1-4.3 to complete the assignment of the polar coordinates of the scene at every pulse position, obtaining one polar coordinate image of the corresponding dimension per pulse.
5. The FFBP SAR imaging-based autofocus method of claim 2, wherein:
in step 5, the specific method for performing FFBP i-level imaging on pulse block data comprises:
step 5.1: according to the decomposition factor n^(i) of level i, updating the sub-aperture center position coordinates (x^(i), y^(i), z^(i)) of the ith-level imaging and the number of azimuth beams, the number of ith-level sub-apertures being the number of (i-1)th-level sub-apertures divided by n^(i); then calculating, for each sub-aperture, the maximum ground distance, the minimum ground distance, the number of range sampling points, and the cosine value of each beam angle of the scene;
step 5.2: performing range up-sampling on the images obtained at level i-1, updating the image dimension accordingly;
step 5.3: the kth sub-aperture of the ith level is obtained by fusing n^(i) sub-aperture images of the (i-1)th level, i.e. the (k·n^(i)+j)th sub-images (j = 0, 1, …, n^(i)-1) of the (i-1)th level; each of these is first resampled, giving the image of the (k·n^(i)+j)th sub-aperture of the (i-1)th level resampled into the coordinate system of the ith-level kth sub-aperture;
step 5.4: judging whether the ith level needs autofocus processing; if so, performing autofocus step 1 to autofocus step 6 (detailed below); otherwise, accumulating the resampled results and assigning the sum to the ith-level kth sub-aperture image;
step 5.5: repeating steps 5.1-5.4 to complete the polar coordinate imaging of all sub-apertures of the ith level, obtaining the corresponding number of polar coordinate images of the updated dimension.
6. The FFBP SAR imaging-based autofocus method of claim 5, wherein:
in step 5.3, the resampling process is as follows: calculating the rectangular coordinates of each sampling point under the ith-level kth sub-aperture polar coordinate system, thereby computing the straight-line distance between each sampling point and the center of the (k·n^(i)+j)th sub-aperture of the (i-1)th level, its projected distance on the ground, and the azimuth cosine, while also computing the straight-line distance between each sampling point and the center position of the ith-level kth sub-aperture; determining the range cell from the value of the projected ground distance, and on this basis obtaining the resampled signal at each sampling point of the ith-level kth sub-aperture using Knab interpolation; finally, multiplying the resampled signal by a phase-compensation factor, wherein λ is the radar operating wavelength.
7. The FFBP SAR imaging-based autofocus method of claim 5, wherein:
in step 5.4, the autofocus algorithm is specified as follows:
self-focusing step 1: processing, in sub-aperture order, all the sub-aperture images to be fused:
self-focusing step 1.1: if the fused sub-aperture image is the globally first sub-aperture of the (i-1)th-level images, i.e. k = 0 and j = 0, calculating the global azimuth positions globalPos^(i) corresponding to the sub-aperture, setting the corresponding global phase gradients globalPhaseGradient^(i) all to zero, and saving the sub-aperture image in the variable LastSubImg^(i); then returning to self-focusing step 1;
self-focusing step 1.2: if the fused sub-aperture image is the first sub-image for synthesizing the ith-level kth sub-aperture image but not the globally first sub-aperture, i.e. the (k·n^(i))th sub-aperture of the (i-1)th level with k ≠ 0 and j = 0, resampling the sub-aperture to the aperture center of its preceding sub-aperture, i.e. to the aperture center of the (k·n^(i)-1)th sub-aperture of the (i-1)th level, as the current sub-aperture image ThisSubImg^(i), then performing self-focusing step 2 to self-focusing step 4;
self-focusing step 1.3: if the fused sub-aperture image is the (k·n^(i)+j)th sub-aperture image of the (i-1)th level, i.e. k ≠ 0 and j ≠ 0, resampling the sub-aperture image to the aperture center of its preceding sub-aperture as the current sub-aperture image ThisSubImg^(i), then performing self-focusing step 2 to self-focusing step 4;
self-focusing step 2: thissubmig with currently processed sub-aperture images(i)And saved in LastSubImg(i)Estimate the average constant change of the phase gradients of two sub-aperturesThen, entering an auto-focusing step 3;
self-focusing step 3: will be presentThe phase gradients of all the aperture positions corresponding to the processed i-1 level sub-apertures are uniformly assigned to beAnd recording the global phase gradient array globalphaseGradient(i)In (1), wherein,the phase gradient of one sub-aperture on the i-1 level; global azimuth position array globalPos according to currently processed sub-aperture center position and aperture length(i)Performing an assignment, globalphaseGradient(i)Phase and globalPos in (1)(i)The positions in (1) correspond to one another; using the current ThisSubImg(i)Updating LastSubImg(i)Updating corresponding parameters at the same time, and then entering into an auto-focusing step 4;
Self-focusing step 4: repeat self-focusing steps 1 to 3 until all the sub-images that need to be fused for the i-th level k-th sub-aperture have been processed, then accumulate all the resampled sub-images and assign the sum to the i-th level k-th sub-aperture image;
Self-focusing step 5: after the fusion of the i-th level k-th sub-aperture image is completed, integrate the phase gradient in globalPhaseGradient^(i) over the sub-aperture interval and store the result in globalPhase^(i) for compensation of the aperture; globalPhase^(i) likewise corresponds one-to-one with the positions in globalPos^(i);
Self-focusing step 6: use globalPhase^(i) to carry out error compensation on the i-th level k-th sub-aperture to obtain the compensated sub-aperture image.
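Self-focusing step 5 above integrates the recorded phase gradient into a phase profile over the aperture positions. A hedged sketch, assuming trapezoidal integration over the globalPos^(i)/globalPhaseGradient^(i) arrays (the patent does not state which integration rule it uses):

```python
import numpy as np

def integrate_phase_gradient(global_pos, global_gradient):
    # Integrate the per-position phase gradient (globalPhaseGradient)
    # into a phase profile (globalPhase) by cumulative trapezoidal
    # integration; the first position is taken as the zero-phase
    # reference. The specific rule is an assumption of this sketch.
    steps = 0.5 * (global_gradient[1:] + global_gradient[:-1]) \
        * np.diff(global_pos)
    return np.concatenate(([0.0], np.cumsum(steps)))
```

A constant gradient yields a linear phase ramp and a linear gradient yields a quadratic phase, matching the usual behavior of phase-gradient autofocus integration.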
8. The FFBP SAR imaging-based autofocus method of claim 5, wherein:
In the self-focusing algorithm in step 5.4, the specific method of using the currently processed sub-aperture image ThisSubImg^(i) and the image saved in LastSubImg^(i) to estimate the average constant change of the phase gradients of the two sub-apertures is as follows:
Self-focusing step 2.1: find point targets in the ThisSubImg^(i) and LastSubImg^(i) images; each selected point target must exhibit point-target characteristics in both sub-aperture images, and each selected point target is assumed to have the same pixel coordinates in the two adjacent sub-aperture images;
Self-focusing step 2.2: in the ThisSubImg^(i) and LastSubImg^(i) images, cut out, centered on the point target, complex image blocks of size N_a × N_r, where N_a and N_r are the numbers of points in the azimuth and range directions respectively, both taken as integer powers of 2, thereby obtaining two complex sub-image blocks;
Self-focusing step 2.3: perform two-dimensional interpolation on the two complex sub-image blocks using the frequency-domain zero-padding method to obtain the interpolated blocks;
Self-focusing step 2.4: search the interpolated block from ThisSubImg^(i) for the pixel position (m_maxp, n_maxp) of the target peak, and measure the phase difference of the two interpolated blocks at that position as arg{a_this · a_last*}, where arg denotes the phase and a_this, a_last are the complex values of the two interpolated blocks at (m_maxp, n_maxp) respectively;
Self-focusing step 2.5: repeat self-focusing steps 2.2 to 2.4 for all point targets to obtain the phase differences at all target positions, and sum them to obtain the average phase difference; the average constant change of the two sub-aperture phase gradients is then obtained by dividing the average phase difference by L_sub, where L_sub is the number of azimuth positions of the (i-1)-th level sub-aperture.
9. The FFBP SAR imaging-based autofocus method of claim 5, wherein:
In the self-focusing algorithm in step 5.4, the specific steps of compensating the error of the i-th level k-th sub-aperture to obtain a compensated sub-aperture image are as follows:
auto-focusing step 6.1: performing two-dimensional Fourier transform on the ith-level kth sub-aperture;
Self-focusing step 6.2: according to the aperture position of the sub-aperture, intercept Phase^(i) at the corresponding position from globalPhase^(i) for compensation, while also compensating the envelope offset; the compensation formula is:
FIc_k^(i) = FI_k^(i) · exp{ -j (1 + λ·f_r / (C·sin β)) · Phase^(i) },
where FI_k^(i) is the two-dimensional frequency spectrum of the i-th level k-th sub-aperture image, FIc_k^(i) is the compensated two-dimensional spectrum, β is the depression angle of the sub-aperture, f_r is the range frequency, and C is the speed of light;
Self-focusing step 6.3: perform a two-dimensional inverse Fourier transform on the compensated two-dimensional spectrum FIc_k^(i) to complete the compensation of the i-th level k-th sub-aperture.
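Steps 6.1–6.3 amount to a 2-D FFT, a multiply by the compensation factor, and an inverse 2-D FFT. A sketch assuming Phase^(i) is indexed along the azimuth axis of the spectrum (the claim does not spell out the indexing) and C is the speed of light:

```python
import numpy as np

C = 299792458.0  # speed of light in m/s

def compensate_subaperture(img, phase, lam, beta, fs):
    # 2-D FFT, multiply by exp{-j (1 + lam*f_r/(C*sin(beta))) * Phase},
    # inverse 2-D FFT. The range-frequency-dependent factor removes the
    # envelope shift induced by the phase error along with the phase
    # error itself. phase has one entry per azimuth spectral line.
    spec = np.fft.fft2(img)
    f_r = np.fft.fftfreq(img.shape[1], d=1.0 / fs)  # range frequency axis
    factor = 1.0 + lam * f_r / (C * np.sin(beta))
    spec *= np.exp(-1j * factor[None, :] * phase[:, None])
    return np.fft.ifft2(spec)
```

With a zero phase profile the routine is an identity (up to FFT round-off), which is a convenient sanity check on the transform conventions.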
10. The FFBP SAR imaging-based autofocus method of claim 2, wherein:
In step 6, additional autofocus processing is performed on the polar coordinate image finally obtained for each pulse block: two adjacent pulse blocks are fused into a higher-resolution image, on which the error is then estimated and compensated. The reason is that a longer synthetic aperture helps to estimate lower-frequency errors.
The specific steps of the additional auto-focus 0 th cycle are as follows:
Step 6.1(0): store the polar coordinate image obtained from the first pulse block in a variable, calculate the global azimuth position corresponding to the sub-aperture, and set the global phase gradient corresponding to the sub-aperture entirely to zero;
Step 6.2(0): resample the polar coordinate image obtained from the next pulse block to the aperture center of the previous pulse block and store it in a variable;
Step 6.3(0): use the currently processed sub-aperture image and the previous adjacent sub-aperture image stored in the variable to estimate the average phase gradient of the aperture and record it in the global phase gradient array; meanwhile, assign the global azimuth position array according to the currently processed sub-aperture center position and aperture length; integrate the phase gradient over the sub-aperture interval and store the result; use it to carry out error compensation on the pulse-block image, and then assign the current pulse-block image to the saved variable; then perform step 6.4(0);
Step 6.4(0): judge whether the 0th cycle is the last cycle; if so, convert the sub-aperture image obtained from the pulse block to the rectangular coordinate system and accumulate it into the final image; if not, repeat steps 6.2(0) to 6.4(0) until all pulse blocks of the 0th cycle have been processed.
The specific steps of the additional autofocus i-th cycle (i ≠ 0) are as follows:
Step 6.1(i): judge whether the (i-1)-th level j-th sub-aperture image is the first sub-aperture for fusing the i-th level k-th sub-aperture image; if so, calculate the dimensions of the fused image, the cosine of the beam angle, and the fused aperture center position, then perform step 6.2(i); if not, directly perform step 6.2(i);
Step 6.2(i): resample the (i-1)-th level j-th sub-aperture image to the sub-aperture coordinate system of the i-th level k-th sub-aperture, and accumulate the resampled image into the i-th level k-th sub-aperture image;
Step 6.3(i): judge whether the (i-1)-th level j-th sub-aperture is the last sub-aperture for fusing the i-th level k-th sub-aperture; if so, perform step 6.4(i); if not, set j = j + 1 and perform step 6.1(i);
Step 6.4(i): judge whether the i-th level k-th sub-aperture image is the globally first sub-aperture image of the i-th level; if so, store the image in a variable, calculate the global azimuth position corresponding to the sub-aperture, set the corresponding global phase gradient entirely to zero, and then perform step 6.5(i); if not, save the currently processed sub-aperture image, estimate the average constant change of the phase gradient against the previously saved sub-aperture image to obtain the gradient value, integrate it to obtain the compensation phase, compensate the error to obtain the compensated i-th level k-th sub-aperture image, assign the current image to the saved variable, and then perform step 6.5(i);
Step 6.5(i): judge whether the i-th cycle is the last cycle; if so, convert the i-th level k-th sub-aperture image to the rectangular coordinate system and accumulate it into the final image; if not, store the image for subsequent processing; then set k = k + 1 and j = j + 1, and repeat steps 6.1(i) to 6.4(i) until all sub-apertures of the (i-1)-th level have been processed.
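The two additional-autofocus cycles above share one recursion pattern: fuse adjacent sub-apertures into a longer aperture, then estimate and compensate the (now lower-frequency) error on it. A structural sketch with placeholder callbacks standing in for the patent's concrete operations (all names are illustrative; any odd leftover sub-aperture is dropped here for brevity):

```python
def extra_autofocus(pulse_images, n_cycles, resample, fuse,
                    estimate_and_compensate, to_cartesian):
    # Each cycle: resample adjacent sub-aperture images to a common
    # aperture center, fuse them pairwise into a longer aperture, and
    # re-estimate/compensate the phase error on the fused image.
    images = list(pulse_images)
    for _ in range(n_cycles):
        images = [
            estimate_and_compensate(fuse(resample(images[k]),
                                         resample(images[k + 1])))
            for k in range(0, len(images) - 1, 2)
        ]
    # after the last cycle, project the surviving images to the
    # rectangular output grid and accumulate the final image
    return to_cartesian(images)
```

With identity callbacks and arithmetic stand-ins, the skeleton simply demonstrates the pairwise halving of the sub-aperture list per cycle.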
CN201610177551.4A 2016-03-23 2016-03-23 A kind of self-focusing method based on FFBP SAR imagings Active CN105842694B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610177551.4A CN105842694B (en) 2016-03-23 2016-03-23 A kind of self-focusing method based on FFBP SAR imagings

Publications (2)

Publication Number Publication Date
CN105842694A true CN105842694A (en) 2016-08-10
CN105842694B CN105842694B (en) 2018-10-09

Family

ID=56583381

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610177551.4A Active CN105842694B (en) 2016-03-23 2016-03-23 A kind of self-focusing method based on FFBP SAR imagings

Country Status (1)

Country Link
CN (1) CN105842694B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101430380A (en) * 2008-12-19 2009-05-13 北京航空航天大学 Large slanting view angle machine-carried SAR beam bunching mode imaging method based on non-uniform sampling
US20110133983A1 (en) * 2006-08-15 2011-06-09 Connell Scott D Methods for two-dimensional autofocus in high resolution radar systems

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
C.V.JAKOWATZ等: ""Considerations for autofocus of spotlight-mode SAR imagery created using a beamforming algorithm"", 《PROCEEDINGS OF SPIE》 *
LMH ULANDER等: ""Synthetic-aperture radar processing using fast factorized back-projection"", 《IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS》 *
LI HAOLIN et al.: ""Research on Autofocus Algorithm of Fast Factorized Back-Projection SAR Imaging"", Journal of Electronics & Information Technology *

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11234015B2 (en) 2016-06-24 2022-01-25 Kt Corporation Method and apparatus for processing video signal
US12063384B2 (en) 2016-06-24 2024-08-13 Kt Corporation Method and apparatus for processing video signal
CN109716773A (en) * 2016-06-24 2019-05-03 株式会社Kt Method and apparatus for handling vision signal
CN109716773B (en) * 2016-06-24 2021-10-29 株式会社Kt Method and apparatus for processing video signal
CN107239740A (en) * 2017-05-05 2017-10-10 电子科技大学 A kind of SAR image automatic target recognition method of multi-source Fusion Features
CN107239740B (en) * 2017-05-05 2019-11-05 电子科技大学 A kind of SAR image automatic target recognition method of multi-source Fusion Features
CN107544068A (en) * 2017-07-14 2018-01-05 电子科技大学 A kind of image area synthetic wideband method based on frequency domain BP
CN107272000B (en) * 2017-07-20 2020-08-07 中国科学院电子学研究所 Method for calculating azimuth phase error of sliding scattering center
CN107272000A (en) * 2017-07-20 2017-10-20 中国科学院电子学研究所 Slide the computational methods of scattering center orientation phase error
CN108205135A (en) * 2018-01-22 2018-06-26 西安电子科技大学 The radar video imaging method of quick rear orientation projection is merged based on no interpolation
CN108205135B (en) * 2018-01-22 2022-03-04 西安电子科技大学 Radar video imaging method based on non-interpolation fusion fast backward projection
CN110095775A (en) * 2019-04-29 2019-08-06 西安电子科技大学 The platform SAR fast time-domain imaging method that jolts based on mixed proportion
CN110095775B (en) * 2019-04-29 2023-03-14 西安电子科技大学 Hybrid coordinate system-based bump platform SAR (synthetic Aperture Radar) rapid time domain imaging method
CN110554385A (en) * 2019-07-02 2019-12-10 中国航空工业集团公司雷华电子技术研究所 Self-focusing imaging method and device for maneuvering trajectory synthetic aperture radar and radar system
CN110554385B (en) * 2019-07-02 2022-10-28 中国航空工业集团公司雷华电子技术研究所 Self-focusing imaging method and device for maneuvering trajectory synthetic aperture radar and radar system
CN110609282A (en) * 2019-09-19 2019-12-24 中国人民解放军军事科学院国防科技创新研究院 Three-dimensional target imaging method and device
CN111537999A (en) * 2020-03-04 2020-08-14 云南电网有限责任公司电力科学研究院 Robust and efficient decomposition projection automatic focusing method
CN111537999B (en) * 2020-03-04 2023-06-30 云南电网有限责任公司电力科学研究院 Robust and efficient decomposition projection automatic focusing method
CN111736151B (en) * 2020-06-16 2022-03-04 西安电子科技大学 Improved FFBP imaging method for efficient global rectangular coordinate projection fusion
CN111736151A (en) * 2020-06-16 2020-10-02 西安电子科技大学 Improved FFBP imaging method for efficient global rectangular coordinate projection fusion
CN111999734A (en) * 2020-08-28 2020-11-27 中国电子科技集团公司第三十八研究所 Broadband strabismus bunching SAR two-step imaging method and system
CN111999734B (en) * 2020-08-28 2022-02-08 中国电子科技集团公司第三十八研究所 Broadband strabismus bunching SAR two-step imaging method
CN112184643A (en) * 2020-09-21 2021-01-05 北京理工大学 Non-parametric SAR image self-adaptive resampling method
CN112184643B (en) * 2020-09-21 2022-11-08 北京理工大学 Non-parametric SAR image self-adaptive resampling method
CN112946645B (en) * 2021-01-29 2022-10-14 北京理工大学 Unmanned aerial vehicle-mounted ultra-wideband SAR self-focusing method
CN112946645A (en) * 2021-01-29 2021-06-11 北京理工大学 Unmanned aerial vehicle-mounted ultra-wideband SAR self-focusing method
CN113221062A (en) * 2021-04-07 2021-08-06 北京理工大学 High-frequency motion error compensation algorithm of small unmanned aerial vehicle-mounted BiSAR system
CN113221062B (en) * 2021-04-07 2023-03-28 北京理工大学 High-frequency motion error compensation method of small unmanned aerial vehicle-mounted BiSAR system
CN113176570A (en) * 2021-04-21 2021-07-27 北京航空航天大学 Squint SAR time domain imaging self-focusing method
CN113552564A (en) * 2021-06-23 2021-10-26 南昌大学 SAR time domain rapid imaging method, system, terminal and application for complex terrain scene
CN115291211A (en) * 2022-08-05 2022-11-04 中国电子科技集团公司第三十八研究所 Circular track SAR full-aperture imaging method without scaler
CN115291211B (en) * 2022-08-05 2024-08-30 中国电子科技集团公司第三十八研究所 Circular SAR full-aperture imaging method without scaler

Also Published As

Publication number Publication date
CN105842694B (en) 2018-10-09

Similar Documents

Publication Publication Date Title
CN105842694B (en) A kind of self-focusing method based on FFBP SAR imagings
Neo et al. Processing of azimuth-invariant bistatic SAR data using the range Doppler algorithm
US7663529B2 (en) Methods for two-dimensional autofocus in high resolution radar systems
CN105974414B (en) High-resolution Spotlight SAR Imaging autohemagglutination focusing imaging method based on two-dimentional self-focusing
Mao et al. Polar format algorithm wavefront curvature compensation under arbitrary radar flight path
CN108205135B (en) Radar video imaging method based on non-interpolation fusion fast backward projection
Mao et al. Autofocus correction of APE and residual RCM in spotlight SAR polar format imagery
CN101685159B (en) Method for constructing spaceborne SAR signal high precision phase-keeping imaging processing platform
CN104251990B (en) Synthetic aperture radar self-focusing method
CN102645651A (en) SAR (synthetic aperture radar) tomography super-resolution imaging method
CN105116411B (en) A kind of bidimensional self-focusing method suitable for range migration algorithm
CN106802416A (en) A kind of quick factorization rear orientation projection SAR self-focusing methods
CN112882030B (en) InSAR imaging interference integrated processing method
Li et al. A novel CFFBP algorithm with noninterpolation image merging for bistatic forward-looking SAR focusing
CN113702974A (en) Method for quickly optimizing airborne/missile-borne synthetic aperture radar image
CN102043142A (en) Polar coordinate wave-front curvature compensation method of synthetic aperture radar based on digital spotlight
CN110109104B (en) Array SAR (synthetic aperture radar) equidistant slice imaging geometric distortion correction method
WO2017154125A1 (en) Synthetic-aperture-radar signal processing device
CN103792534B (en) SAR two-dimension autofocus method based on prior phase structure knowledge
Yang et al. A new fast back-projection algorithm using polar format algorithm
CN113608218B (en) Frequency domain interference phase sparse reconstruction method based on back projection principle
Zhang et al. A two-stage time-domain autofocus method based on generalized sharpness metrics and AFBP
Yang et al. An optimal polar format refocusing method for bistatic SAR moving target imaging
Zhang et al. An improved time-domain autofocus method based on 3-D motion errors estimation
CN117129994A (en) Improved backward projection imaging method based on phase compensation nuclear GNSS-SAR

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant