CN105842694B - A kind of self-focusing method based on FFBP SAR imagings - Google Patents
A kind of self-focusing method based on FFBP SAR imagings
- Publication number
- CN105842694B CN105842694B CN201610177551.4A CN201610177551A CN105842694B CN 105842694 B CN105842694 B CN 105842694B CN 201610177551 A CN201610177551 A CN 201610177551A CN 105842694 B CN105842694 B CN 105842694B
- Authority
- CN
- China
- Prior art keywords
- aperture
- sub
- image
- self
- focusing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/89—Radar or analogous systems specially adapted for specific applications for mapping or imaging
- G01S13/90—Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
- G01S13/9004—SAR image acquisition techniques
- G01S13/9019—Auto-focussing of the SAR signals
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/89—Radar or analogous systems specially adapted for specific applications for mapping or imaging
- G01S13/90—Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
- G01S13/9004—SAR image acquisition techniques
- G01S13/9017—SAR image acquisition techniques with time domain processing of the SAR signals in azimuth
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/89—Radar or analogous systems specially adapted for specific applications for mapping or imaging
- G01S13/90—Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
- G01S13/904—SAR modes
Landscapes
- Engineering & Computer Science (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Physics & Mathematics (AREA)
- Electromagnetism (AREA)
- Computer Networks & Wireless Communication (AREA)
- General Physics & Mathematics (AREA)
- Signal Processing (AREA)
- Radar Systems Or Details Thereof (AREA)
Abstract
The present invention provides a self-focusing (autofocus) method based on FFBP SAR imaging, comprising eight steps and ultimately producing a finely focused SAR image. Beneficial technical effects: built on the FFBP framework, the invention extracts the phase-gradient information of the motion error from the phase information of adjacent sub-apertures of a point target, then estimates the motion error by integration and compensates for it, achieving high-precision autofocus processing of the SAR image. Compared with existing methods, this method is not affected by the error order and can estimate motion errors of arbitrary order, which substantially improves the robustness of the autofocus algorithm. At the same time, because the method extracts the error information from the phase difference of adjacent sub-apertures, it avoids the repeated computation of nested overlapping sub-aperture approaches and requires no large-scale optimization search, significantly reducing the computational load and shortening the imaging time.
Description
Technical field
The invention belongs to the field of SAR signal processing and relates to synthetic aperture radar (SAR) imaging, specifically to the autofocus processing of SAR images; it is a self-focusing method based on FFBP SAR imaging.
Background technology
SAR imaging algorithms fall broadly into two classes: time-domain imaging algorithms and frequency-domain imaging algorithms. Frequency-domain imaging algorithms exploit the fact that the Doppler frequency characteristics of the targets in a scene are identical or similar, and perform a unified compression in the range-Doppler domain or the two-dimensional frequency domain. Because all targets are compressed at once, the computational efficiency is very high. Common frequency-domain imaging methods include the range-Doppler (RD) algorithm, the Chirp Scaling (CS) algorithm, the wavenumber-domain (ω-k) algorithm, and the spectral analysis (SPECAN) algorithm commonly used in ScanSAR. These frequency-domain methods were developed mainly for stripmap SAR. For other operating modes, special-purpose imaging algorithms have been proposed; spotlight SAR, for example, was first imaged with the polar format algorithm (Polar Format Algorithm, PFA). All frequency-domain algorithms except SPECAN require the echo signal to be transformed into the azimuth frequency domain, so they presuppose that the signal is free of aliasing in that domain. But for spotlight and sliding-spotlight SAR, the pulse repetition frequency (Pulse Repetition Frequency, PRF) of the radar system cannot exceed the full azimuth Doppler bandwidth, so azimuth spectral aliasing is unavoidable. Imaging then requires either resolving the azimuth spectral aliasing or using azimuth sub-aperture images to avoid it. Such processing is feasible in the broadside case, but for large squint angles, especially spaceborne large-squint ultra-high-resolution cases, the tight coupling between range and azimuth makes the phase severely space-variant over the entire imaging region, and traditional frequency-domain imaging methods can hardly solve this azimuth space-variance problem. Time-domain imaging methods such as the back-projection (Back Projection, BP) algorithm handle the range-azimuth coupling of large-squint imaging well, are suitable for image reconstruction under complex imaging geometries such as large coherent integration angles or nonlinear flight paths, and produce focused images free of geometric distortion. However, because BP computes pixel by pixel, the computational load of high-resolution imaging of large scenes is enormous and the efficiency is far below that of frequency-domain algorithms. To reduce the computational load of BP, researchers have proposed many fast back-projection algorithms, among which the FFBP (Fast Factorized Back-Projection) algorithm is one of the most widely used.
The FFBP algorithm is a fast algorithm proposed to remove the computational redundancy of BP. Its principle is that angular-domain resolution is proportional to sub-aperture length, so the sub-apertures formed at the initial stage have a very small angular bandwidth; by the Nyquist sampling theorem, the corresponding angular sampling rate can be very low, and a small number of angular samples suffices for a coarse angular-resolution image. There is therefore no need to back-project the data of every aperture position onto every pixel of the imaging grid. Built on this idea, FFBP adopts an algorithm structure similar to the butterfly computation, with a computational complexity of N^2 log N, close to that of frequency-domain algorithms. At the same time, FFBP retains the advantages of traditional BP imaging, namely block-wise imaging and convenient motion compensation. FFBP can therefore replace traditional BP in the large-data-scale imaging scenarios that BP handles.
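The N^2 log N scaling quoted above can be illustrated with a rough operation count (a back-of-the-envelope sketch, not part of the patent; the counting model and function names are our assumptions): direct BP projects every pulse onto every pixel, while FFBP performs roughly log N fusion stages, each touching on the order of N^2 image samples.

```python
import math

def direct_bp_ops(n: int) -> int:
    """Direct back-projection: n pulses, each projected onto an n x n grid."""
    return n * n * n

def ffbp_ops(n: int, factor: int = 2) -> int:
    """Factorized back-projection: about log_factor(n) fusion stages,
    each touching on the order of n^2 image samples."""
    stages = max(1, round(math.log(n, factor)))
    return n * n * stages

# For n = 1024 this sketch predicts a roughly hundredfold reduction,
# consistent with the butterfly-like structure described above.
```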
For time-domain imaging methods, the ground scene can be reconstructed exactly as long as the platform positions are accurate. In actual SAR data acquisition, however, the precision of the inertial navigation system only reaches 2-10 cm, whereas ultra-high-resolution SAR imaging at around 0.1 m requires the position of the radar phase centre to millimetre accuracy, which current inertial navigation systems cannot deliver. The residual motion error caused by this precision limit produces serious azimuth defocus, or even makes imaging impossible. For a long time, researchers in the SAR field have concentrated on frequency-domain imaging algorithms and their autofocus algorithms while paying much less attention to time-domain autofocus, so research results in this area remain comparatively scarce. Yet the fine focusing of time-domain imaging algorithms, their ease of motion compensation, and their freedom from ground geometric distortion give them unique advantages in emerging SAR imaging fields such as circular SAR, ultra-wideband (UWB) SAR, and bistatic SAR. Moreover, with the rapid development of parallel computing, the computational-load problem that long plagued time-domain imaging algorithms and their fast variants is being eliminated, and SAR researchers increasingly turn to time-domain imaging. In recent years more and more articles in related fields have been published, but most remain at the level of simple application of the algorithm; research on autofocus processing for time-domain algorithms, and for FFBP in particular, is still rare.
Most of the literature on autofocus for BP follows an optimization-based focusing approach: a cost function assessing image focus quality is established, and the motion-error parameters are estimated by optimization. But such algorithms involve the optimization of high-dimensional parameters, the computational load is very large, and the applicability is very limited. Charles V. Jakowatz et al. pointed out that traditional frequency-domain autofocus methods such as the Phase Gradient Autofocus (PGA) algorithm cannot be used directly for BP imaging in a geographic coordinate system, because a Fourier transform pair cannot be established between the image domain and the range-compressed phase-history domain. They also noted that under small angular-domain conditions an approximate Fourier transform pair relationship holds between azimuth angle and time, and proposed a solution: first apply PGA to the BP imaging result in a range-azimuth coordinate system, then transform the image into a Cartesian coordinate system; the effectiveness of this method was verified by semi-simulation (see "C.V. Jakowatz, Jr. and D.E. Wahl, Considerations for autofocus of spotlight-mode SAR imagery created using a beamforming algorithm, Proc. of SPIE, Vol. 7337, pp. 73370A-1-73370A-9."). Lei Zhang et al. proposed combining Multiple Aperture Map Drift (MAMD) with FFBP imaging to estimate the Doppler frequency rate at every image level (see "Lei Zhang et al., Integrating autofocus techniques with fast factorized back-projection for high resolution spotlight SAR imaging, IEEE Geoscience and Remote Sensing Letters, 2013, 10(6):1394-1398."), but this method must process all pulses of the imaging data at once and must retain the information of all sub-images of two adjacent imaging levels; when the number of imaging pulses is large, the memory it needs is enormous. All intermediate image results could of course be written to disk and read back when needed, but that adds a great deal of data input/output time. Moreover, this method controls the estimation precision through the polynomial order: a higher order gives higher precision, but increasing the order degrades the estimation performance. Li Haolin et al. proposed combining FFBP with multiple-aperture phase gradient autofocus for imaging (see "Li Haolin et al., Application of phase gradient autofocus in FFBP spotlight SAR processing, Journal of Xidian University (Natural Science Edition), 2014, 41(3):26-32."), but this method not only has to process all pulses of the imaging data at once, it also requires adjacent sub-aperture images to overlap by half, since only then is the phase error estimated by PGA continuous in azimuth. This roughly doubles the complexity of the algorithm and greatly increases the redundant computation.
Invention content
Aimed at the shortcomings of existing FFBP-based imaging autofocus algorithms, the present invention proposes a method that performs autofocus processing using the phase information of adjacent sub-apertures of a point target. It first establishes, from that phase information, the relationship between the first derivatives of adjacent sub-apertures, then estimates the residual motion-error information by a single integration and compensates for it, avoiding sub-aperture overlap and the complicated follow-up computation it entails. Moreover, the algorithm divides the azimuth pulses into different pulse blocks, and the pulse blocks are imaged one after another in sequence: the first pulse block is first imaged in polar format and then converted to a Cartesian coordinate system for image fusion, so that the estimation and compensation of the motion error are fully embedded in the stage-by-stage operations of FFBP imaging, and the autofocus processing is completed along with the SAR imaging itself. After the first pulse block completes imaging, the second pulse block is processed, and so on. The benefit is that the intermediate results of all sub-apertures need not be stored; only the intermediate sub-image results within the current pulse block are kept, which greatly saves memory. Compared with the existing autofocus processing for BP imaging, this method needs no large-scale optimization search, it inherits the FFBP imaging framework completely, and the increase in computational load is very small. In addition, because the azimuth resolution of FFBP imaging rises from low to high over the image recursion, this FFBP-based autofocus method is very well suited to handling image defocus that crosses range cells.
The present invention is specified as follows:
A self-focusing method based on FFBP SAR imaging, characterized by comprising eight data-processing modules:
A module that initializes the SAR system parameters and observation scene parameters.
A module that sets the FFBP imaging parameters, interpolation kernels, and autofocus parameters.
A module that divides the raw radar data, reads the data of one pulse block, and performs range pulse compression.
A module that performs level-0 FFBP imaging on the pulse-block data.
A module that performs level-i FFBP imaging on the pulse-block data.
A module that judges whether additional autofocus processing needs to be executed.
A module that transforms the polar-coordinate image into the Cartesian coordinate system and accumulates it into the final SAR image.
A module that judges whether all pulse blocks have been processed.
Further, the self-focusing method based on FFBP SAR imaging proceeds specifically as follows:
Step 1 (corresponding to "the module that initializes the SAR system parameters and observation scene parameters"): initialize the SAR system parameters and observation scene parameters.
The imaging Cartesian coordinate system and its origin are determined from the position information output by the inertial navigation system and the beam-pointing information. The initialized SAR system parameters include: the radar platform position vector (x^(0), y^(0), z^(0)), the platform flight speed V, the radar wavelength λ, the bandwidth B_r of the transmitted linear FM signal, the transmitted pulse width T_p, the pulse repetition frequency PRF, the echo range sampling frequency F_s, the propagation speed C of electromagnetic waves in air, and the radar echo data matrix. The radar echo data matrix has K fast-time (range) samples and L slow-time (azimuth) samples.
The observation scene parameters include: the azimuth and range sampling intervals dx, dy of the scene, the azimuth and range sample counts M, N, and the plane coordinates (x_c_scene, y_c_scene) of the scene centre. That is, the plane coordinates of ground cell (m, n) in the scene can be expressed as (x_c_scene + m·dx, y_c_scene + n·dy), where -M/2 ≤ m < M/2 and -N/2 ≤ n < N/2. The vertical coordinate of each ground pixel is obtained from an externally supplied digital elevation model (Digital Elevation Model, DEM), or the average reference elevation h_ref of the imaged scene region is used.
Step 2 (corresponding to "the module that sets the FFBP imaging parameters, interpolation kernels, and autofocus parameters"): set the FFBP imaging parameters, interpolation kernels, and autofocus parameters.
The FFBP imaging parameters are the number of pulse blocks BlkNum, the number of pulses per pulse block, the number of levels LevelNum, and the factorization factor n^(i) of each level. The number of sub-apertures and the number of azimuth beams processed at each imaging level are determined by the FFBP imaging parameters. The interpolation kernels are the methods used for range and azimuth interpolation in FFBP imaging: frequency-domain zero-padding interpolation is used in range, and Knab interpolation in azimuth.
The autofocus parameters are the number of autofocus iterations and three global storage arrays allocated for the per-level autofocus processing, namely: the azimuth-position global array globalPos, the phase-gradient global array globalPhaseGradient, and the integrated-phase global array globalPhase.
Step 3 (corresponding to "the module that divides the raw radar data, reads the data of one pulse block, and performs range pulse compression"): divide the raw radar data by the number of pulse blocks BlkNum and the per-block pulse count set in step 2. Read the data of one pulse block, perform range pulse compression, then go to step 4.
Step 4 (corresponding to "the module that performs level-0 FFBP imaging on the pulse-block data"): perform level-0 FFBP imaging on the pulse-block data, then go to step 5.
Step 5 (corresponding to "the module that performs level-i FFBP imaging on the pulse-block data"): perform level-i FFBP imaging on the pulse-block data.
When all sub-aperture images in the pulse block have been fused into one polar-coordinate image, go to step 6.
Step 6 (corresponding to "the module that judges whether additional autofocus processing needs to be executed"): judge whether additional autofocus processing needs to be executed. If so, execute the additional autofocus step and then go to step 7; if not, go to step 7 directly.
Step 7 (corresponding to "the module that transforms the polar-coordinate image into the Cartesian coordinate system and accumulates it into the final SAR image"): transform the polar-coordinate image obtained in step 6 into the Cartesian coordinate system, and accumulate the Cartesian image obtained from this pulse block into the final SAR image.
Step 8 (corresponding to "the module that judges whether all pulse blocks have been processed"): judge whether all the pulse-block data divided by the method of step 3 have been processed:
If unprocessed pulse-block data remain, read the data of one unprocessed pulse block, perform range pulse compression, and return to step 4.
If all pulse-block data have been processed, end the operation.
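The block-by-block control flow of steps 3-8 can be sketched as a toy driver (purely illustrative: each "image" is a number so the loop can run end to end, and every stub function is our placeholder for the corresponding module, not the patent's implementation):

```python
def process_all_blocks(pulses, blk_size, level_num):
    """Toy driver for steps 3-8: blocks are imaged one after another and
    accumulated into a final result; real modules replace the stubs."""
    range_compress = lambda blk: [p * 1.0 for p in blk]  # step 3 stub
    level0 = lambda rc: sum(rc)                          # step 4 stub
    leveli = lambda img, i: img                          # step 5 stub
    needs_autofocus = lambda img: False                  # step 6 stub
    to_cartesian = lambda img: img                       # step 7 stub

    final = 0.0
    blocks = [pulses[k:k + blk_size] for k in range(0, len(pulses), blk_size)]
    for blk in blocks:                                   # step 8 loop
        img = level0(range_compress(blk))
        for i in range(1, level_num + 1):
            img = leveli(img, i)
        if needs_autofocus(img):
            pass                                         # additional autofocus step
        final += to_cartesian(img)                       # accumulate into final image
    return final
```

The key property the sketch preserves is that only one block's intermediate results are alive at a time, which is the memory saving claimed above.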
Beneficial technical effects
Built on the FFBP framework, the present invention extracts the phase-gradient information of the motion error from the phase information of adjacent sub-apertures of a point target, then estimates the motion error by integration and compensates for it, achieving high-precision autofocus processing of the SAR image. Compared with existing methods, this method is not affected by the error order and can estimate motion errors of arbitrary order, which substantially improves the robustness of the autofocus algorithm. At the same time, because the method extracts the error information from the phase difference of adjacent sub-apertures, it avoids the repeated computation of nested overlapping sub-aperture approaches and requires no large-scale optimization search, significantly reducing the computational load and shortening the imaging time. Moreover, the method inherits the FFBP imaging architecture completely, so autofocus is achieved without major changes to the original imaging architecture. Each pulse block is imaged in sequence without storing the intermediate images of all pulse blocks, which greatly reduces the algorithm's memory demand. In addition, unlike existing methods, this method uses a ground polar coordinate system to generate the polar sub-images during FFBP imaging, which makes it easier to introduce external elevation during autofocus and improves the precision of the autofocus estimate, giving the method greater application potential in the SAR autofocus processing of ultra-high-resolution scenes or of terrain with violent relief.
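The estimate-then-integrate principle claimed above can be sketched numerically (an illustrative toy, not the patent's full per-level procedure; the sinusoidal error, array sizes, and variable names are arbitrary assumptions): the phase difference of adjacent sub-aperture samples of a strong point target approximates the gradient of the error phase, and a single cumulative sum recovers the error up to a constant, regardless of its polynomial order.

```python
import numpy as np

n = 256
pos = np.arange(n) / n
true_err = 2.0 * np.sin(2 * np.pi * 3 * pos)     # smooth motion-error phase (rad)
target = np.exp(1j * true_err)                   # point-target samples across sub-apertures

# phase gradient from adjacent sub-aperture phase differences
grad = np.angle(target[1:] * np.conj(target[:-1]))

# integrate once (cumulative sum) to recover the error phase
est = np.concatenate(([0.0], np.cumsum(grad)))

# the estimate matches the true error up to a constant offset
residual = (est - est.mean()) - (true_err - true_err.mean())
# max |residual| is at machine precision for this smooth error
```

Because the gradient is estimated sample by sample, no polynomial model of the error is ever assumed, which is why the approach is insensitive to the error order.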
Description of the drawings
Fig. 1 is the flow diagram of the present invention.
Fig. 2 is a schematic diagram of the ground polar coordinate system used in FFBP imaging in the present invention.
Fig. 3 lists the simulation parameters of the airborne spotlight SAR system.
Fig. 4 is a sinusoidal error with amplitude 0.05 m and period 10 s.
Fig. 5 is the imaging result under the sinusoidal error shown in Fig. 4.
Fig. 6 is the phase gradient of the residual motion error estimated in the first autofocus iteration.
Fig. 7 is the result of integrating the motion error estimated in Fig. 6 (after removing the global linear term).
Fig. 8 is the imaging result obtained with the method of the present invention.
Fig. 9 is the image of Fig. 8 after 32-times interpolation.
Fig. 10 is the range point-target response obtained with the method of the present invention.
Fig. 11 is the azimuth point-target response obtained with the method of the present invention.
Specific implementation mode
Referring to Fig. 1, a self-focusing method based on FFBP SAR imaging comprises the following steps:
Step 1: initialize the SAR system parameters and observation scene parameters.
The imaging Cartesian coordinate system and its origin are determined from the position information output by the inertial navigation system and the beam-pointing information. The initialized SAR system parameters include: the radar platform position vector (x^(0), y^(0), z^(0)), the platform flight speed V, the radar wavelength λ, the bandwidth B_r of the transmitted linear FM signal, the transmitted pulse width T_p, the pulse repetition frequency PRF, the echo range sampling frequency F_s, the propagation speed C of electromagnetic waves in air, and the radar echo data matrix. The radar echo data matrix has K fast-time (range) samples and L slow-time (azimuth) samples, with K and L positive integers. In the radar platform position vector (x^(0), y^(0), z^(0)), x^(0) is the platform position along the horizontal X axis, y^(0) the position along the horizontal Y axis, and z^(0) the position along the vertical height axis.
The observation scene parameters include: the azimuth and range sampling intervals dx, dy of the scene, the azimuth and range sample counts M, N, and the plane coordinates (x_c_scene, y_c_scene) of the scene centre. That is, the plane coordinates of ground cell (m, n) in the scene can be expressed as (x_c_scene + m·dx, y_c_scene + n·dy), where -M/2 ≤ m < M/2 and -N/2 ≤ n < N/2. The vertical coordinate of each ground pixel is obtained from an externally supplied DEM, or the average reference elevation h_ref of the imaged scene region is used.
Step 2: set the FFBP imaging parameters, interpolation kernels, and autofocus parameters.
The FFBP imaging parameters are the number of pulse blocks BlkNum, the number of pulses per pulse block, the number of levels LevelNum, and the factorization factor n^(i) of each level. The number of sub-apertures and the number of azimuth beams processed at each imaging level are determined by the FFBP imaging parameters. The interpolation kernels are the methods used for range and azimuth interpolation in FFBP imaging: frequency-domain zero-padding interpolation is used in range, and Knab interpolation in azimuth.
The autofocus parameters are the number of autofocus iterations and three global storage arrays allocated for the per-level autofocus processing, namely: the azimuth-position global array globalPos, the phase-gradient global array globalPhaseGradient, and the integrated-phase global array globalPhase.
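The step-2 autofocus bookkeeping can be sketched as a small container (the array names follow the text; the dataclass itself, its `loops` field, and the `integrate` helper are our assumptions, not the patent's data layout):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AutofocusState:
    loops: int                                                       # autofocus iterations
    globalPos: List[float] = field(default_factory=list)             # azimuth positions
    globalPhaseGradient: List[float] = field(default_factory=list)   # phase-gradient estimates
    globalPhase: List[float] = field(default_factory=list)           # integrated phase

    def integrate(self) -> None:
        """Fill globalPhase by once-integrating (cumulative sum) the gradient."""
        total, self.globalPhase = 0.0, []
        for g in self.globalPhaseGradient:
            total += g
            self.globalPhase.append(total)
```

Keeping the three arrays global across levels is what lets each FFBP stage append its estimates without reprocessing earlier pulses.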
Step 3: divide the raw radar data by the number of pulse blocks BlkNum and the per-block pulse count set in step 2. Read the data of one pulse block, perform range pulse compression, then go to step 4.
Step 4: perform level-0 FFBP imaging on the pulse-block data, then go to step 5.
Step 5: perform level-i FFBP imaging on the pulse-block data.
When all sub-aperture images in the pulse block have been fused into one polar-coordinate image, go to step 6.
Step 6: judge whether additional autofocus processing needs to be executed. If so, execute the additional autofocus step; if not, go to step 7.
Step 7: transform the polar-coordinate image obtained in step 5 or step 6 into the Cartesian coordinate system, and accumulate the Cartesian image obtained from this pulse block into the final SAR image.
Step 8: judge whether all the pulse-block data divided by the method of step 3 have been processed:
If unprocessed pulse-block data remain, read the data of one unprocessed pulse block, perform range pulse compression, and return to step 4.
If all pulse-block data have been processed, end the operation.
Further, in step 1, when the resolution of the SAR image is at the metre level or coarser, or when the imaging region is relatively flat, the vertical coordinate of the ground pixels uses the average reference elevation h_ref of the imaged scene region; that is, the elevation of all ground cells is taken to be the average reference elevation. When the resolution is very high, especially when the resolution of the SAR image is better than 0.2 m, or when the resolution is not high but the terrain of the imaging region has violent relief, the vertical coordinate of the ground pixels is provided by the DEM.
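The elevation-source rule above can be written as a one-line selector (a sketch; the helper name, signature, and the idea of passing a per-pixel DEM value are our illustrative assumptions):

```python
def pixel_height(dem_value, resolution_m, terrain_rugged, h_ref):
    """Choose the vertical coordinate per the rule above: use the DEM when the
    resolution is finer than 0.2 m or the terrain has strong relief, otherwise
    the flat average reference elevation h_ref."""
    use_dem = dem_value is not None and (resolution_m < 0.2 or terrain_rugged)
    return dem_value if use_dem else h_ref
```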
Referring to Fig. 1, further, in step 4, the specific method of level-0 FFBP imaging of the pulse-block data is:
Step 4.1: establish a ground polar coordinate system in polar radius and azimuth cosine, and set the coordinate-system parameters:
From the plane coordinates of the imaged scene centre, the azimuth and range sample counts M, N, and their sampling intervals dx, dy, obtain the plane coordinates of the four corner points of the scene; then, using the ground projection of the aircraft position output by the inertial navigation system, compute the maximum ground distance R_max^(0) and minimum ground distance R_min^(0) of the scene at each pulse position.
Then, from N_r^(0) = int((R_max^(0) - R_min^(0)) / bin_R) + 1, obtain the sample count of the range dimension of the polar coordinate system at that pulse position, where int denotes rounding and bin_R is the range sampling interval of the raw echo signal; the range sampling interval of the polar coordinate system is identical to that of the raw echo signal.
Set the N_r^(0) value of all pulses to be identical, namely the maximum N_r^(0) value within the pulse block.
At each pulse position, obtain the maximum cosine MaxCos^(0) and minimum cosine MinCos^(0) of the angle θ between the scene and the aircraft heading, and, according to the number of azimuth beams N_θ^(0), obtain the cosine of the included angle at each beam position: cos θ_k^(0) = MinCos^(0) + k·Δcos, where Δcos = (MaxCos^(0) - MinCos^(0)) / (N_θ^(0) - 1) and k = 0, 1, …, N_θ^(0) - 1.
From the three-dimensional coordinates of the four corner points of the scene and of the aircraft position, compute the spatial distances between the aircraft and the four corner points, thereby obtaining the maximum slant range and minimum slant range of the scene at each pulse position. For broadside or small-squint geometry, the minimum slant range is the closest slant range from the scene to the entire flight path.
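The grid bookkeeping of step 4.1 can be sketched as follows (a sketch under the assumption, stated above, that the beam cosines are uniformly spaced between MinCos^(0) and MaxCos^(0); the function names are ours):

```python
import math

def polar_range_bins(r_max: float, r_min: float, bin_r: float) -> int:
    """Range sample count of the level-0 polar grid,
    N_r = int((R_max - R_min) / bin_R) + 1; the polar range spacing
    equals the raw-echo range spacing bin_r."""
    return int(math.floor((r_max - r_min) / bin_r)) + 1

def beam_cosines(min_cos: float, max_cos: float, n_theta: int):
    """Uniformly spaced direction cosines between MinCos and MaxCos,
    one per azimuth beam."""
    step = (max_cos - min_cos) / (n_theta - 1)
    return [min_cos + k * step for k in range(n_theta)]
```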
Step 4.2 is to the maximum oblique distance in all pulse positions in the pulse packetWith minimum oblique distance
Extreme value is sought, the maximum oblique distance of scene on the pulse packet is obtainedWith minimum oblique distanceAnd accordingly original
Intercepted in echo-signal corresponding signal segment into row distance to liter sampling.
Step 4.3 is according to the calculating of step 4.1, dimension of the scene under polar coordinate system in each pulse position
Number isThen utilize the cosine value of azimuth beam angleAnd minimum distanceIt acquires
Plane rectangular coordinates value on polar coordinates sampled point (m, n): Wherein,For polar polar radius, and For the sine value of azimuth beam angle.
Height value on each sampled point passes through plane coordinatesIt is obtained in external DEM files.If without DEM
File, available reference elevation hrefInstead of height value.
After the coordinates of the polar sampling points are obtained, the distance to the aircraft coordinate (x(0), y(0), z(0)) at each pulse position is computed, and the signal at each sampling point is picked from the echo data of that pulse according to this distance, yielding the polar coordinate image for that pulse position.
Step 4.4 repeats Steps 4.1-4.3 to complete the polar coordinate assignment of the scene at every pulse position, obtaining the full set of polar coordinate images, one per pulse position.
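The per-pulse polar assignment of Steps 4.1-4.4 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name and argument layout are invented, a nearest-neighbour echo lookup stands in for the interpolation described later, and the carrier-phase restoration factor is an assumption taken from standard backprojection (the text above only specifies the range-based lookup).

```python
import numpy as np

def backproject_pulse_polar(echo, t0, fs, c, plane_xy, heights, plat_pos, wavelength):
    """Assign one range-compressed pulse onto a ground polar grid (0th-stage FFBP).

    echo      : complex range-compressed (and range-upsampled) pulse samples
    t0, fs    : fast time of the first sample and sampling rate of `echo`
    plane_xy  : (M, N, 2) planar Cartesian coordinates of the polar sampling points
    heights   : (M, N) heights from a DEM (or a constant reference elevation h_ref)
    plat_pos  : (x0, y0, z0) aircraft position at this pulse
    """
    dx = plane_xy[..., 0] - plat_pos[0]
    dy = plane_xy[..., 1] - plat_pos[1]
    dz = heights - plat_pos[2]
    r = np.sqrt(dx**2 + dy**2 + dz**2)           # slant range to every grid point
    # fast-time index of each grid point's echo sample (nearest neighbour)
    idx = np.clip(np.round((2.0 * r / c - t0) * fs).astype(int), 0, echo.size - 1)
    # carrier-phase restoration as in standard backprojection (an assumption here)
    return echo[idx] * np.exp(1j * 4.0 * np.pi * r / wavelength)
```

Calling this once per pulse in the block produces the per-pulse polar coordinate images that Step 4.4 accumulates.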
Referring to Fig. 1, further, in Step 5 the specific method for performing the i-th-stage FFBP imaging of the pulse block data is:
Step 5.1 updates, according to the i-th-stage factorization factor n(i), the sub-aperture center coordinates (x(i), y(i), z(i)) for i-th-stage imaging, and updates the azimuth beam number; the number of sub-apertures becomes the (i-1)-th-stage sub-aperture count divided by n(i). It then computes, for each sub-aperture, the scene's maximum and minimum ranges, the range sampling number, and the cosine of each squint angle.
Step 5.2 performs range upsampling on the images obtained at the (i-1)-th stage.
Step 5.3: the k-th sub-aperture of the i-th stage is obtained by fusing n(i) sub-aperture images of the (i-1)-th stage. The corresponding (i-1)-th-stage subimages are the (k·n(i)+j)-th ones (j = 0, 1, …, n(i)−1); each is first resampled, giving the (k·n(i)+j)-th (i-1)-th-stage sub-aperture image resampled into the coordinate system of the k-th i-th-stage sub-aperture.
Step 5.4 judges whether autofocus processing is to be performed at the i-th stage. If so, Self-focusing Steps 1-6 (detailed below) are executed. Otherwise, the resampled results are accumulated and assigned to the k-th i-th-stage sub-aperture image.
Step 5.5 repeats Steps 5.1-5.4 to complete the polar coordinate imaging of all i-th-stage sub-apertures.
Further, in Step 5.3 the resampling process is as follows. The Cartesian coordinates of each sampling point under the k-th i-th-stage sub-aperture polar coordinate system are computed, from which the straight-line distance between each sampling point and the (k·n(i)+j)-th (i-1)-th-stage sub-aperture center, its projected distance on the ground, and the azimuth cosine are obtained; at the same time, the straight-line distance between each sampling point and the k-th i-th-stage sub-aperture center is computed. The range cell is determined from the projected distance, and within that range cell the signal at each sampling point of the resampled k-th i-th-stage sub-aperture is obtained by Knab interpolation according to the azimuth cosine. Finally, the resampled signal is multiplied by a phase factor to compensate the phase, where λ is the radar operating wavelength.
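The final phase-compensation step can be sketched as below. The exact factor is elided in the source; this sketch assumes the standard FFBP form exp(j·4π·(R_parent − R_child)/λ), and all names are illustrative.

```python
import numpy as np

def fuse_phase_compensation(signal, pt_xyz, parent_center, child_center, wavelength):
    """Phase correction applied after resampling a parent ((i-1)-th-stage) sub-aperture
    image onto the child (i-th-stage) sub-aperture grid.  Assumes the standard FFBP
    factor exp(j*4*pi*(R_parent - R_child)/lambda); the source elides the formula."""
    r_parent = np.linalg.norm(pt_xyz - parent_center, axis=-1)  # range to parent centre
    r_child = np.linalg.norm(pt_xyz - child_center, axis=-1)    # range to child centre
    return signal * np.exp(1j * 4.0 * np.pi * (r_parent - r_child) / wavelength)
```

When parent and child centers coincide the factor reduces to unity, as expected for an unchanged aperture geometry.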
Referring to Fig. 1, further, in Step 5.4 the autofocus algorithm is as follows:
Self-focusing Step 1: classify and process all sub-aperture images required for fusing the sub-aperture.
Self-focusing Step 1.1: if the sub-aperture image to be fused is the globally first sub-aperture of the (i-1)-th-stage images, i.e. k = 0, j = 0, compute the global azimuth position globalPos(i) corresponding to this sub-aperture, set the corresponding global phase gradient globalPhaseGradient(i) entirely to zero, and store the sub-aperture image in the variable LastSubImg(i). Then re-execute Self-focusing Step 1.
Self-focusing Step 1.2: if the sub-aperture image to be fused is the first subimage used to synthesize the k-th i-th-stage sub-aperture image but is not the globally first sub-aperture, i.e. the (k·n(i))-th (i-1)-th-stage sub-aperture with k ≠ 0 and j = 0, resample it to the aperture center of its preceding sub-aperture, i.e. to the aperture center of the (k·n(i)−1)-th (i-1)-th-stage sub-aperture, take it as the current sub-aperture image ThisSubImg(i), and then execute Self-focusing Steps 2-4.
Self-focusing Step 1.3: if the sub-aperture image to be fused is the (k·n(i)+j)-th (i-1)-th-stage sub-aperture image with k ≠ 0 and j ≠ 0, resample it to the aperture center of its preceding sub-aperture, take it as the current sub-aperture image ThisSubImg(i), and then execute Self-focusing Steps 2-4.
Self-focusing Step 2: using the currently processed sub-aperture image ThisSubImg(i) and the adjacent preceding sub-aperture image stored in LastSubImg(i), estimate the average constant change of the phase gradient between the two sub-apertures, then enter Self-focusing Step 3.
Self-focusing Step 3: uniformly assign the value obtained from this estimate and the phase gradient estimated for the previous (i-1)-th-stage sub-aperture to the phase gradients of all aperture positions corresponding to the currently processed (i-1)-th-stage sub-aperture, and record them in the global phase gradient array globalPhaseGradient(i). Assign the global azimuth position array globalPos(i) according to the currently processed sub-aperture center and aperture length; the phases in globalPhaseGradient(i) correspond one-to-one with the positions in globalPos(i). Update the value stored in LastSubImg(i) with the current ThisSubImg(i), update the corresponding parameters, and then enter Self-focusing Step 4.
Self-focusing Step 4: repeat Self-focusing Steps 1-3 until all subimages required for fusing the k-th i-th-stage sub-aperture have been processed; accumulate all resampled subimages and assign the sum to the k-th i-th-stage sub-aperture image.
Self-focusing Step 5: after the fusion of the k-th i-th-stage sub-aperture image is completed, integrate the phase gradient over this sub-aperture segment in globalPhaseGradient(i) and save the result in globalPhase(i) for the compensation of this aperture; globalPhase(i) likewise corresponds one-to-one with the positions in globalPos(i).
Self-focusing Step 6: use globalPhase(i) to perform error compensation on the k-th i-th-stage sub-aperture, obtaining the compensated sub-aperture image.
Referring to Fig. 1, further, in the autofocus algorithm of Step 5.4, the specific method of estimating the average constant change of the phase gradient between two sub-apertures from the currently processed sub-aperture image ThisSubImg(i) and the adjacent preceding sub-aperture image stored in LastSubImg(i) is:
Self-focusing Step 2.1: find point targets in the ThisSubImg(i) and LastSubImg(i) images; every selected point target must exhibit point-target characteristics in both sub-aperture images. Suppose the selected point-target count and the common pixel coordinates of each target in the two adjacent sub-aperture images are recorded.
Self-focusing Step 2.2: in the ThisSubImg(i) and LastSubImg(i) images, intercept complex image blocks of size Na × Nr centered at each target's pixel coordinates, where Na and Nr are the azimuth and range point counts and take values that are integer powers of 2, obtaining two complex subimage blocks.
Self-focusing Step 2.3: perform two-dimensional interpolation on the two complex subimage blocks by frequency-domain zero padding, obtaining the interpolated results.
Self-focusing Step 2.4: search for the pixel location (mmaxp, nmaxp) of the target peak in the interpolated blocks, and measure the phase difference between the two blocks at this location, i.e. the difference of their arguments, where arg denotes taking the phase and athis, alast denote the complex values at (mmaxp, nmaxp) in the two images respectively.
Self-focusing Step 2.5: repeat Self-focusing Steps 2.2-2.4 for all point targets, obtain the phase differences at all target positions, and sum them to get the average phase difference. The average constant change of the phase gradient between the two sub-apertures is then obtained from this average and the sub-aperture length, where Lsub is the azimuth position count of an (i-1)-th-stage sub-aperture.
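Self-focusing Steps 2.1-2.5 can be sketched as below. This is a simplified illustration under stated assumptions: the FFT zero-padding interpolation of Step 2.3 is omitted (the peak is taken on the raw grid), the constant is assumed to be the average phase difference divided by Lsub (the source elides the exact formula), and the function and argument names are invented.

```python
import numpy as np

def phase_gradient_const(this_blocks, last_blocks, l_sub):
    """Estimate the constant phase-gradient change between two adjacent sub-aperture
    images from point-target patches.

    this_blocks/last_blocks : lists of complex Na x Nr patches around each target,
                              taken from ThisSubImg and LastSubImg respectively
    l_sub                   : azimuth position count of an (i-1)-th-stage sub-aperture
    """
    diffs = []
    for a_this, a_last in zip(this_blocks, last_blocks):
        # peak pixel in the current sub-aperture patch
        m, n = np.unravel_index(np.argmax(np.abs(a_this)), a_this.shape)
        # phase difference arg(a_this) - arg(a_last) at the peak location
        diffs.append(np.angle(a_this[m, n] * np.conj(a_last[m, n])))
    return np.mean(diffs) / l_sub   # assumed normalization by sub-aperture length
```

Averaging over several targets, as Step 2.5 prescribes, reduces the influence of any single noisy peak measurement.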
Referring to Fig. 1, further, in the autofocus algorithm of Step 5.4, the specific steps of error compensation for the k-th i-th-stage sub-aperture, yielding the compensated sub-aperture image, are:
Self-focusing Step 6.1: apply a two-dimensional Fourier transform to the k-th i-th-stage sub-aperture.
Self-focusing Step 6.2: according to the aperture position of the sub-aperture, intercept the corresponding phase segment Phase(i) from globalPhase(i) for compensation, and compensate the envelope offset at the same time; in the compensation formula, β is the depression angle of the sub-aperture, fr is the range frequency, and the quantity being corrected is the two-dimensional spectrum of the sub-aperture, which yields the compensated two-dimensional spectrum.
Self-focusing Step 6.3: apply a two-dimensional inverse Fourier transform to the compensated two-dimensional spectrum, completing the compensation of the k-th i-th-stage sub-aperture.
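Self-focusing Steps 6.1-6.3 can be outlined as follows. This is a sketch under stated assumptions: the envelope-shift term involving the depression angle β and the range frequency fr is elided in the source and omitted here, and the mapping of the integrated globalPhase segment onto the azimuth axis (one value per azimuth bin) is an assumption.

```python
import numpy as np

def compensate_subaperture(img, phase_err):
    """Compensate an azimuth phase error in the 2-D spectrum of a sub-aperture image.

    img       : complex sub-aperture image (azimuth x range)
    phase_err : integrated phase-error segment, one value per azimuth bin (assumption)
    """
    spec = np.fft.fft2(img)                    # Step 6.1: 2-D Fourier transform
    spec *= np.exp(-1j * phase_err)[:, None]   # Step 6.2: remove the phase error
    return np.fft.ifft2(spec)                  # Step 6.3: 2-D inverse transform
```

A zero phase-error array leaves the image unchanged, which is a convenient sanity check on the transform pair.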
Further, in Step 6, additional autofocus processing is performed on the final polar coordinate image of each pulse block: two adjacent pulse blocks are fused into a higher-resolution image, on which the error is then estimated and compensated. The reason for doing so is that a longer synthetic aperture helps estimate lower-frequency errors.
The 0th cycle of additional self-focusing is as follows:
Step 6.1(0): store the polar coordinate image obtained from the first pulse block in the designated variable, compute the global azimuth position corresponding to this sub-aperture, and set the corresponding global phase gradient entirely to zero.
Step 6.2(0): resample the polar coordinate image obtained from the next pulse block to the aperture center of the previous pulse block, and store it as the current image.
Step 6.3(0): estimate the average phase gradient of the aperture from the currently processed sub-aperture image and the adjacent preceding image, record it in the global phase gradient array, and assign the global azimuth position array according to the currently processed sub-aperture center and aperture length. Integrate the phase gradient over this sub-aperture segment and store it; use the integrated phase to perform error compensation on the pulse-block image, assign the current image to the stored previous-image variable, and then execute Step 6.4(0).
Step 6.4(0): judge whether the 0th cycle is the last cycle. If so, transform the sub-aperture image obtained from the pulse block to the Cartesian coordinate system and add it to the final image. If not, repeat Steps 6.2(0)-6.4(0) until the 0th cycle has processed all pulse blocks.
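The integration in Step 6.3(0) — turning the per-position phase gradient into the phase error used for compensation — can be sketched as a cumulative sum over azimuth positions. This is an assumption about the elided formula (a trapezoidal or otherwise scaled integral is equally plausible), and the names are illustrative.

```python
import numpy as np

def integrate_gradient(global_gradient, positions):
    """Integrate a per-position phase gradient over a sub-aperture segment.

    global_gradient : phase gradient at each azimuth position (globalPhaseGradient)
    positions       : corresponding azimuth positions (globalPos)
    Returns the integrated phase over the segment (the globalPhase values).
    """
    d = np.diff(positions, prepend=positions[0])  # azimuth spacing (first step = 0)
    return np.cumsum(global_gradient * d)         # running integral of the gradient
```

With a constant gradient and uniform spacing the result is a linear phase ramp, the behavior expected of a constant-gradient error.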
The i-th cycle (i ≠ 0) of additional self-focusing is as follows:
Step 6.1(i): judge whether the j-th (i-1)-th-stage sub-aperture image is the first sub-aperture used to fuse the k-th i-th-stage sub-aperture image. If so, compute the dimension of the fused image, the cosine of the squint angle, and the aperture center position after fusion, then execute Step 6.2(i). If not, execute Step 6.2(i) directly.
Step 6.2(i): resample the j-th (i-1)-th-stage sub-aperture image into the sub-aperture coordinate system of the k-th i-th-stage sub-aperture, and add the resampled image to the k-th i-th-stage sub-aperture image.
Step 6.3(i): judge whether the j-th (i-1)-th-stage sub-aperture is the last sub-aperture used to fuse the k-th i-th-stage sub-aperture. If not, set j = j + 1 and execute Step 6.1(i); if so, execute Step 6.4(i).
Step 6.4(i): judge whether the k-th i-th-stage sub-aperture image is the globally first i-th-stage sub-aperture image. If so, store the image in the designated variable, compute the global azimuth position corresponding to this sub-aperture, set the corresponding global phase gradient entirely to zero, and then execute Step 6.5(i). If not, store the currently processed sub-aperture image as the current image, estimate the average constant change of the phase gradient against the stored previous sub-aperture image, obtain the gradient values, integrate them to obtain the phase error, compensate this error to get the compensated k-th i-th-stage sub-aperture image, assign the current image to the previous-image variable, and then execute Step 6.5(i).
Step 6.5(i): judge whether the i-th cycle is the last cycle. If so, transform the k-th i-th-stage sub-aperture image to the Cartesian coordinate system and add it to the final image; if not, save the image for subsequent processing. Then set k = k + 1, j = j + 1, and repeat Steps 6.1(i)-6.4(i) until all (i-1)-th-stage sub-apertures are processed.
Embodiment
The present invention is verified mainly by simulation experiments; all steps and conclusions were verified correct in Visual Studio 2010. The invention can be used for autofocus processing of FFBP imaging in stripmap, spotlight, and sliding-spotlight SAR; here an ultrahigh-resolution (better than 0.1 m) spotlight SAR simulation is taken as an example. Fig. 1 is the flow diagram of the present invention. The specific implementation steps are as follows:
Step 1: set the SAR system parameters and observation scene parameters. Fig. 2 is a schematic of the ground polar coordinate system used by the FFBP algorithm, and Fig. 3 gives the simulation parameters of the spotlight SAR system. Five point targets with elevation 0 m are placed near the scene center. Sinusoidal errors with amplitude 0.005 m and period 10 s are added in the platform's y and z directions; the line-of-sight error at the scene center is shown in Fig. 4, and the result obtained without autofocus processing under this error is shown in Fig. 5.
Step 2: determine the FFBP imaging parameters and self-focusing parameters and select the interpolation kernels. The FFBP algorithm uses 2048 pulses per pulse block and processes 12 complete pulse blocks in total. Seven stages of operation are required; the factorization factor is 4 for every stage except the last, which is 2, and the polar coordinate image is finally transformed into the Cartesian coordinate system. To estimate the residual motion error accurately, 16× FFT interpolation is used in range and 16-point Knab interpolation in azimuth. The azimuth beam numbers of the successive stages are 8, 16, 64, 256, 512, 2048, and 4096. The number of self-focusing cycles is 1 + 2: autofocus processing is performed at the 6th FFBP stage, and after pulse-block imaging is complete, the additional self-focusing method performs 2 autofocus passes over adjacent pulse blocks.
Step 3: read the 0th-2047th pulses and perform range pulse compression.
Step 4: the 0th-stage FFBP imaging yields 2048 polar coordinate images of dimension 8 × 4316.
Step 5: the 1st-stage FFBP imaging yields 512 polar coordinate images of dimension 16 × 4316; the 2nd stage yields 128 images of 64 × 4316; the 3rd stage yields 32 images of 256 × 4316; the 4th stage yields 8 images of 512 × 4316; the 5th stage yields 2 images of 2048 × 4316. Autofocus processing is performed at the 6th stage: for every pair of adjacent sub-apertures, all 5 point targets are selected for phase-difference estimation, with intercepted target image blocks of 64 pixels (azimuth) × 8 pixels (range), yielding 1 polar coordinate image of dimension 4096 × 4314. Figs. 6 and 7 show the full-aperture phase gradient obtained by the 6th-stage autofocus processing (and its integral) and the subsequent FFBP imaging result.
Step 6: after the first pulse block's imaging is complete, a polar coordinate image of dimension 4096 × 4314 is obtained; save this image in the LastSubImg(i) variable and initialize the related variables. Otherwise, use the image in LastSubImg(i) and the currently processed ThisSubImg(i) image to estimate the constant change of the phase gradient between the two pulse blocks according to the autofocus method above, obtain the phase gradient over the two apertures, integrate once to get the final phase error, and compensate it on the fused image.
Step 7: transform the polar coordinate image obtained in Step 6 into the Cartesian coordinate system, and add the resulting Cartesian image of the pulse block to the final SAR image.
Step 8: repeat Steps 3-7 until all pulse blocks have been processed.
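The block-by-block flow of Steps 3-8 can be outlined as below. Every stage is injected as a callable, and every helper name is an illustrative placeholder rather than one of the patent's routines; the point of the sketch is only the streaming structure that keeps one pulse block in memory at a time.

```python
def process_all_blocks(read_block, range_compress, ffbp_all_stages,
                       extra_autofocus, polar_to_cartesian, n_blocks):
    """Image and autofocus pulse blocks one at a time (Steps 3-8 in outline)."""
    final_image = None
    last_polar = None
    for b in range(n_blocks):
        rc = range_compress(read_block(b))                      # Step 3
        polar = ffbp_all_stages(rc)                             # Steps 4-5
        polar, last_polar = extra_autofocus(polar, last_polar)  # Step 6
        rect = polar_to_cartesian(polar)                        # Step 7
        final_image = rect if final_image is None else final_image + rect
    return final_image                                          # Step 8: all blocks done
```

Because each block is read, imaged, compensated, and accumulated before the next is read, memory use stays bounded by one block, which is the advantage the text claims for large pulse counts.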
Fig. 8 is the point-target compression result after self-focusing is complete, Fig. 9 is its image after 32× upsampling, and Figs. 10 and 11 are the range and azimuth point-target responses obtained with the method of the present invention. It can be seen that after processing by the method of the present invention, the compression quality of the point targets is greatly improved. It can also be seen that the method does not require the error to be confined to a single range cell; even when the error spans multiple range cells, a satisfactory focusing result is still obtained. Moreover, the method does not need to process all of the pulses of one imaging at once; instead it images and autofocuses pulse block by pulse block in order, which greatly reduces the algorithm's memory demand when the pulse count is huge.
Claims (4)
1. A self-focusing method based on FFBP SAR imaging, characterized in that the specific processing steps are as follows:
Step 1: initialize the SAR system parameters and observation scene parameters;
the imaging Cartesian coordinate system and its origin are determined from the position information output by the inertial navigation system and the beam pointing information; the initialized SAR system parameters include: the radar platform position vector (x(0), y(0), z(0)), the platform flying speed V, the radar operating wavelength λ, the bandwidth Br of the transmitted linear FM signal, the transmitted pulse width Tp, the radar pulse repetition frequency PRF, the echo range sampling frequency Fs, the propagation speed C of electromagnetic waves in air, and the radar echo data matrix; the radar echo data matrix comprises K fast-time (range) sampling points and L slow-time (azimuth) sampling points;
the observation scene parameters include: the azimuth and range sampling intervals dx, dy of the scene, the azimuth and range sample counts M, N, and the planar coordinate position (xc_scene, yc_scene) of the scene center; that is, the planar coordinates of the (m, n)-th ground cell in the scene can be expressed as (xc_scene + m·dx, yc_scene + n·dy), where −M/2 ≤ m < M/2 and −N/2 ≤ n < N/2; the vertical coordinate of each ground pixel is obtained from an externally input DEM, or using the average reference elevation h_ref of the imaged scene region;
when the resolution of the SAR image is meter-level or coarser, the vertical coordinate of a ground pixel uses the average reference elevation h_ref of the imaged scene region, i.e., the elevation of all ground cells is regarded as the average reference elevation; when the resolution of the SAR image is better than 0.2 m, the vertical coordinate of a ground pixel is provided by the DEM;
Step 2: set the FFBP imaging parameters, interpolation kernels, and self-focusing parameters;
the FFBP imaging parameters are the pulse block count BlkNum, the pulse count per pulse block, the stage count LevelNum, and the per-stage factorization factor n(i); the sub-aperture number and azimuth beam number processed at every imaging stage are determined by the FFBP imaging parameters;
the interpolation kernels are the methods used for range and azimuth interpolation in FFBP imaging, where range interpolation uses frequency-domain zero-padding interpolation and azimuth interpolation uses Knab interpolation;
the self-focusing parameters are the cycle count of autofocus processing and the 3 global arrays opened up per stage of autofocus processing for storage, respectively: the global azimuth position globalPos(i), the global phase gradient globalPhaseGradient(i), and the global integrated phase globalPhase(i);
Step 3: divide the raw radar data by the pulse block count BlkNum and per-block pulse count set in Step 2; read one pulse block's data, perform range pulse compression, and then enter Step 4;
Step 4: perform the 0th-stage FFBP imaging of the pulse block data, then enter Step 5;
the specific method of performing the 0th-stage FFBP imaging of the pulse block data is:
Step 4.1: establish the ground polar coordinate system in polar radius and azimuth cosine, and set the coordinate system parameters:
from the planar coordinates of the imaged scene center, the azimuth and range sample counts M, N and their sampling intervals dx, dy, obtain the planar coordinates of the four scene corner points; then, using the projection on the ground of the aircraft position output by the inertial navigation system, compute the scene's maximum and minimum ranges at each pulse position;
further, from the functional expression, obtain the range-dimension sampling number of the polar coordinate system at that pulse position, where int denotes rounding and bin_R is the range sampling interval of the raw echo signal; the range sampling interval of the polar coordinate system is identical to that of the raw echo signal;
the range sampling number of the polar grid is set to the same value for all pulses, namely the maximum value within the pulse block;
the maximum cosine MaxCos(0) and minimum cosine MinCos(0) of the angle θ between the scene and the aircraft heading are obtained at each pulse position, and the cosine of the angle at each beam position is obtained from the azimuth beam number;
from the three-dimensional coordinates of the four scene corner points and the aircraft position, the distances between the aircraft and the four corner points are computed, giving the maximum slant range and minimum slant range of the scene at each pulse position; under broadside or small-squint geometry, the minimum slant range is the scene's closest slant range to the entire flight path;
Step 4.2: take extrema over the maximum and minimum slant ranges at all pulse positions within the pulse block to obtain the maximum and minimum slant range of the scene for that block, and accordingly intercept the corresponding signal segment from the raw echo signal for range upsampling;
Step 4.3: from the computation of Step 4.1, the dimension of the scene under the polar coordinate system at each pulse position is fixed; the cosine of the azimuth beam angle and the minimum range are then used to obtain the planar Cartesian coordinate values at each polar sampling point (m, n), expressed through the polar radius and the sine of the azimuth beam angle;
the height value at each sampling point is obtained from an external DEM file via its planar Cartesian coordinates; if no DEM file is available, a reference elevation h_ref can be used instead;
after the coordinates of the polar sampling points are obtained, the distance to the aircraft coordinate (x(0), y(0), z(0)) at each pulse position is computed, and the signal at each sampling point is obtained from the echo data of that pulse according to this distance, yielding the polar coordinate image at that pulse position;
Step 4.4: repeat Steps 4.1-4.3 to complete the polar coordinate assignment of the scene at every pulse position, obtaining the full set of polar coordinate images, one per pulse position;
Step 5: perform the i-th-stage FFBP imaging of the pulse block data;
when all sub-aperture images in the pulse block have been fused into one polar coordinate image, enter Step 6;
the specific method of performing the i-th-stage FFBP imaging of the pulse block data is:
Step 5.1: according to the i-th-stage factorization factor n(i), update the sub-aperture center coordinates (x(i), y(i), z(i)) for i-th-stage imaging, and update the azimuth beam number; the number of sub-apertures becomes the (i-1)-th-stage sub-aperture count divided by n(i); then compute, for each sub-aperture, the scene's maximum and minimum ranges, the range sampling number, and the cosine of each squint angle;
Step 5.2: perform range upsampling on the images obtained at the (i-1)-th stage;
Step 5.3: the k-th sub-aperture of the i-th stage is obtained by fusing n(i) sub-aperture images of the (i-1)-th stage; the corresponding (i-1)-th-stage subimages are the (k·n(i)+j)-th ones, where j = 0, 1, …, n(i)−1; each is first resampled, giving the (k·n(i)+j)-th (i-1)-th-stage sub-aperture image resampled into the coordinate system of the k-th i-th-stage sub-aperture;
Step 5.4: judge whether autofocus processing is to be performed at the i-th stage: if so, execute the autofocus algorithm processing; otherwise, accumulate the resampled results and assign them to the k-th i-th-stage sub-aperture image;
in Step 5.4, the autofocus algorithm is as follows:
Self-focusing Step 1: classify and process all sub-aperture images required for fusing the sub-aperture:
Self-focusing Step 1.1: if the sub-aperture image to be fused is the globally first sub-aperture of the (i-1)-th-stage images, i.e. k = 0, j = 0, compute the global azimuth position globalPos(i) corresponding to this sub-aperture, set the corresponding global phase gradient globalPhaseGradient(i) entirely to zero, and store the sub-aperture image in the variable LastSubImg(i); then re-execute Self-focusing Step 1;
Self-focusing Step 1.2: if the sub-aperture image to be fused is the first subimage used to synthesize the k-th i-th-stage sub-aperture image but is not the globally first sub-aperture, i.e. the (k·n(i))-th (i-1)-th-stage sub-aperture with k ≠ 0 and j = 0, resample it to the aperture center of its preceding sub-aperture, i.e. to the aperture center of the (k·n(i)−1)-th (i-1)-th-stage sub-aperture, take it as the current sub-aperture image ThisSubImg(i), and then execute Self-focusing Steps 2-4;
Self-focusing Step 1.3: if the sub-aperture image to be fused is the (k·n(i)+j)-th (i-1)-th-stage sub-aperture image with k ≠ 0 and j ≠ 0, resample it to the aperture center of its preceding sub-aperture, take it as the current sub-aperture image ThisSubImg(i), and then execute Self-focusing Steps 2-4;
Self-focusing Step 2: using the currently processed sub-aperture image ThisSubImg(i) and the adjacent preceding sub-aperture image stored in LastSubImg(i), estimate the average constant change of the phase gradient between the two sub-apertures, then enter Self-focusing Step 3;
Self-focusing Step 3: uniformly assign the value obtained from this estimate and the phase gradient of the previous (i-1)-th-stage sub-aperture to the phase gradients of all aperture positions corresponding to the currently processed (i-1)-th-stage sub-aperture, and record them in the global phase gradient globalPhaseGradient(i); assign the global azimuth position globalPos(i) according to the currently processed sub-aperture center and aperture length; the phases in globalPhaseGradient(i) correspond one-to-one with the positions in globalPos(i); update the value stored in LastSubImg(i) with the current ThisSubImg(i) and update the corresponding parameters, then enter Self-focusing Step 4;
Self-focusing Step 4: repeat Self-focusing Steps 1-3 until all subimages required for fusing the k-th i-th-stage sub-aperture have been processed; accumulate all resampled subimages and assign the sum to the k-th i-th-stage sub-aperture image;
Self-focusing Step 5: after the fusion of the k-th i-th-stage sub-aperture image is completed, integrate the phase gradient over this sub-aperture segment in globalPhaseGradient(i) and save the result in globalPhase(i) for the compensation of this aperture; globalPhase(i) likewise corresponds one-to-one with the positions in globalPos(i);
Self-focusing Step 6: use globalPhase(i) to perform error compensation on the k-th i-th-stage sub-aperture, obtaining the compensated sub-aperture image;
Step 5.5: repeat Steps 5.1-5.4 to complete the polar coordinate imaging of all i-th-stage sub-apertures;
Step 6: judge whether additional autofocus processing needs to be executed: if so, enter Step 7 after executing the additional self-focusing steps; if not, enter Step 7 directly;
additional autofocus processing is performed on the final polar coordinate image of each pulse block: two adjacent pulse blocks are fused into a higher-resolution image, on which the error is then estimated and compensated; the reason for doing so is that a longer synthetic aperture helps estimate lower-frequency errors;
the 0th cycle of additional self-focusing is as follows:
Step 6.1(0): store the polar coordinate image obtained from the first pulse block in the designated variable, compute the global azimuth position corresponding to this sub-aperture, and set the corresponding global phase gradient entirely to zero;
Step 6.2(0): resample the polar coordinate image obtained from the next pulse block to the aperture center of the previous pulse block, and store it as the current image;
Step 6.3(0): estimate the average phase gradient of the aperture from the currently processed sub-aperture image and the adjacent preceding image, record it in the global phase gradient array, and assign the global azimuth position array according to the currently processed sub-aperture center and aperture length; integrate the phase gradient over this sub-aperture segment and store it; use the integrated phase to perform error compensation on the pulse-block image, assign the current image to the stored previous-image variable, and then execute Step 6.4(0);
Step 6.4(0): judge whether the 0th cycle is the last cycle: if so, transform the sub-aperture image obtained from the pulse block to the Cartesian coordinate system and add it to the final image; if not, repeat Steps 6.2(0)-6.4(0) until the 0th cycle has processed all pulse blocks;
the i-th cycle of additional self-focusing, with i ≠ 0, is as follows:
Step 6.1(i): judge whether the j-th (i-1)-th-stage sub-aperture image is the first sub-aperture used to fuse the k-th i-th-stage sub-aperture image: if so, compute the dimension of the fused image, the cosine of the squint angle, and the aperture center position after fusion, then execute Step 6.2(i); if not, execute Step 6.2(i) directly;
Step 6.2(i): resample the j-th (i-1)-th-stage sub-aperture image into the sub-aperture coordinate system of the k-th i-th-stage sub-aperture, and add the resampled image to the k-th i-th-stage sub-aperture image;
Step 6.3(i): judge whether the j-th (i-1)-th-stage sub-aperture is the last sub-aperture used to fuse the k-th i-th-stage sub-aperture: if not, set j = j + 1 and execute Step 6.1(i); if so, execute Step 6.4(i);
Step 6.4(i): judge whether the k-th i-th-stage sub-aperture image is the globally first i-th-stage sub-aperture image: if so, store the image in the designated variable, compute the global azimuth position corresponding to this sub-aperture, set the corresponding global phase gradient entirely to zero, and then execute Step 6.5(i); if not, store the currently processed sub-aperture image as the current image, estimate the average constant change of the phase gradient against the stored previous sub-aperture image, obtain the gradient values, integrate them to obtain the phase error, compensate this error to get the compensated k-th i-th-stage sub-aperture image, assign the current image to the previous-image variable, and then execute Step 6.5(i);
Step 6.5(i): judge whether the i-th cycle is the last cycle: if so, transform the k-th i-th-stage sub-aperture image to the Cartesian coordinate system and add it to the final image; if not, save the image for subsequent processing; then set k = k + 1, j = j + 1, and repeat Steps 6.1(i)-6.4(i) until all (i-1)-th-stage sub-apertures are processed;
Step 7: transform the polar coordinate image obtained in step 6 into the rectangular coordinate system, and add the rectangular coordinate image obtained from this pulse packet into the final SAR image;
Step 8: judge whether all pulse packet data divided by the method of step 3 have been processed:
If unprocessed pulse packet data remain, read the next unprocessed pulse packet data, perform range pulse compression, and return to step 4;
If all pulse packet data have been processed, end the operation.
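Steps 6.1(i) to 6.5(i) describe one stage of the factorized back-projection recursion: groups of neighbouring sub-aperture images are resampled onto a shared grid, summed, and autofocused before the next, coarser stage. A minimal Python sketch of this control flow only (the resampling is stubbed to an identity operation, the per-stage autofocus is omitted, and the fusion factor `n_per_stage` is a hypothetical name, not from the patent):

```python
import numpy as np

def ffbp_merge(sub_images, n_per_stage=2):
    """Skeleton of the stage-by-stage sub-aperture fusion of steps 6.1-6.5.

    Resampling (step 6.2) is stubbed to identity and the phase-gradient
    autofocus (step 6.4) is omitted, so only the merge structure is shown.
    """
    stage = [np.asarray(s, dtype=complex) for s in sub_images]
    while len(stage) > 1:
        merged = []
        for k in range(0, len(stage), n_per_stage):
            group = stage[k:k + n_per_stage]
            acc = np.zeros_like(group[0])
            for sub in group:
                acc += sub  # step 6.2 with resampling stubbed to identity
            # per-stage autofocus of the fused aperture would go here (step 6.4)
            merged.append(acc)
        stage = merged  # proceed to the next, coarser stage
    return stage[0]
```

With a two-way fusion factor, 2^m input sub-apertures are merged in m stages, which is what gives FFBP its reduced operation count relative to direct back-projection.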
2. The self-focusing method based on FFBP SAR imaging according to claim 1, characterised in that in step 5.3 the process of resampling is specifically as follows: calculate the rectangular coordinates of each sampling point under the polar coordinate system of the k-th sub-aperture of the i-th stage, and from them calculate the straight-line distance between each sampling point and the centre of the (kn(i) + j)-th sub-aperture of the (i-1)-th stage, its projected distance on the ground, and the cosine of its azimuth angle; meanwhile, calculate the straight-line distance between each sampling point and the centre position of the k-th sub-aperture of the i-th stage. Determine the range cell according to the distance value; then, within that range cell and according to the azimuth value, obtain the signal at each sampling point of the resampled k-th sub-aperture of the i-th stage by Knab interpolation. Finally, multiply the resampled signal by the corresponding phase compensation factor, where λ is the wavelength at which the radar operates.
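Claim 2 reads each output sample from the parent sub-aperture image with Knab interpolation. The exact kernel used by the patent is given only as an image; the sketch below uses a common cosh-windowed sinc approximation of Knab's interpolator, and the kernel length `L` and window parameter `v` are illustrative choices, not values from the patent:

```python
import numpy as np

def knab_kernel(x, L, v):
    # cosh-windowed sinc: a common closed-form approximation of the
    # Knab interpolation kernel (window parameter v, support length L)
    w = np.zeros_like(x)
    m = np.abs(x) <= L / 2
    w[m] = np.cosh(np.pi * v * np.sqrt(1.0 - (2.0 * x[m] / L) ** 2)) / np.cosh(np.pi * v)
    return w * np.sinc(x)

def knab_interp(samples, t, L=8, v=2.5):
    # interpolate a uniformly sampled signal at the fractional position t
    n0 = int(np.floor(t))
    idx = np.arange(n0 - L // 2 + 1, n0 + L // 2 + 1)
    idx = np.clip(idx, 0, len(samples) - 1)  # guard the array edges
    return np.sum(samples[idx] * knab_kernel(t - idx, L, v))
```

At integer positions the kernel reduces to a unit sample, so the interpolator passes exactly through the original samples; off-grid accuracy depends on the oversampling margin of the signal, which is why BP-style imaging usually oversamples in range before interpolating.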
3. The self-focusing method based on FFBP SAR imaging according to claim 1, characterised in that in self-focusing step 2 the specific method of estimating the average constant variation of the phase gradient between two adjacent sub-apertures, using the currently processed sub-aperture image ThisSubImg(i) and the adjacent sub-aperture image stored in LastSubImg(i), is:
Self-focusing step 2.1: find point targets in the ThisSubImg(i) and LastSubImg(i) images; a selected point target must exhibit point-target characteristics in both sub-aperture images. For each selected point target, record its pixel coordinates in the two adjacent sub-aperture images;
Self-focusing step 2.2: in the ThisSubImg(i) and LastSubImg(i) images, intercept complex image blocks of size Na × Nr centred on each recorded pixel coordinate, where Na and Nr are the numbers of points in azimuth and range respectively, both taken as integer powers of 2, thereby obtaining two complex sub-image blocks;
Self-focusing step 2.3: perform two-dimensional interpolation on the two complex sub-image blocks by the method of frequency-domain zero padding, obtaining the interpolated results;
Self-focusing step 2.4: search for the pixel location (mmaxp, nmaxp) of the target peak in the interpolated block, and measure the phase difference between the two interpolated blocks at this location, i.e. Δφ = arg(a_this · a_last*), where arg(·) denotes taking the phase, and a_this and a_last respectively denote the complex values of the two interpolated images at (mmaxp, nmaxp);
Self-focusing step 2.5: repeat self-focusing steps 2.2 to 2.4 for all point targets to obtain the phase differences at all target positions, and sum them to obtain the average phase difference; the average constant variation of the phase gradient between the two sub-apertures is then this average phase difference divided by Lsub, where Lsub is the number of azimuth positions of the (i-1)-th stage sub-aperture.
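The point-target measurement of self-focusing steps 2.3 and 2.4 can be sketched directly: oversample each complex block by frequency-domain zero padding, locate the peak, and read off the phase difference there. The sketch below assumes even block sizes (guaranteed by the claim, since Na and Nr are powers of 2) so the zero-padded spectrum stays centred; the upsampling factor is an illustrative choice:

```python
import numpy as np

def upsample2d(block, factor):
    # 2-D interpolation by frequency-domain zero padding (step 2.3);
    # scaling by factor**2 preserves the original sample values
    Na, Nr = block.shape
    F = np.fft.fftshift(np.fft.fft2(block))
    P = np.zeros((Na * factor, Nr * factor), complex)
    a0 = (Na * factor - Na) // 2
    r0 = (Nr * factor - Nr) // 2
    P[a0:a0 + Na, r0:r0 + Nr] = F
    return np.fft.ifft2(np.fft.ifftshift(P)) * factor ** 2

def pairwise_phase_diff(this_blk, last_blk, factor=8):
    # steps 2.3-2.4: upsample both blocks, find the target peak in the
    # current block, and measure the phase difference at that pixel
    ti = upsample2d(this_blk, factor)
    li = upsample2d(last_blk, factor)
    m, n = np.unravel_index(np.argmax(np.abs(ti)), ti.shape)
    return np.angle(ti[m, n] * np.conj(li[m, n]))
```

Averaging this measurement over all selected point targets and dividing by Lsub then yields the constant phase-gradient correction of step 2.5.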
4. The self-focusing method based on FFBP SAR imaging according to claim 1, characterised in that in self-focusing step 6 the specific steps of compensating the error of the k-th sub-aperture of the i-th stage, to obtain the compensated sub-aperture image, are:
Self-focusing step 6.1: perform a two-dimensional Fourier transform on the k-th sub-aperture of the i-th stage;
Self-focusing step 6.2: according to the aperture position of this sub-aperture, intercept the phase of the corresponding position from the global Phase(i) for compensation, while also compensating the offset of the envelope; in the compensation formula, the two-dimensional spectrum of the sub-aperture image is multiplied by a phase factor built from the intercepted phase, yielding the compensated two-dimensional spectrum, where β is the depression angle of the sub-aperture, C is the propagation speed of electromagnetic waves in air, and fr is the range frequency;
Self-focusing step 6.3: perform a two-dimensional inverse Fourier transform on the compensated two-dimensional spectrum, completing the compensation of the k-th sub-aperture of the i-th stage.
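Self-focusing steps 6.1 to 6.3 apply the intercepted segment of Phase(i) in the two-dimensional spectral domain. The patent's exact compensation formula (involving β and C) is given only as an image; the sketch below therefore uses a simplified, assumed model in which the azimuth phase error is scaled across range frequency by a first-order envelope term (1 + fr/fc), with fc a hypothetical radar centre frequency:

```python
import numpy as np

def compensate_subaperture(img, phase_err, fc, fr_axis):
    """Compensate a sub-aperture image by a known azimuth phase-error vector.

    phase_err : phase error intercepted from the global Phase(i), one value
                per azimuth frequency bin (an assumed layout).
    fc        : radar centre frequency (assumed parameter).
    fr_axis   : range-frequency axis of the 2-D spectrum.
    """
    S = np.fft.fft2(img)                                # step 6.1: 2-D FFT
    scale = 1.0 + fr_axis[np.newaxis, :] / fc           # assumed envelope model
    H = np.exp(-1j * phase_err[:, np.newaxis] * scale)  # conjugate phase factor
    return np.fft.ifft2(S * H)                          # step 6.3: inverse FFT
```

Because the correction is a pure phase multiply in the spectral domain, applying the conjugate of a known error recovers the original image exactly, which is the property the per-stage autofocus relies on.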
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610177551.4A CN105842694B (en) | 2016-03-23 | 2016-03-23 | A kind of self-focusing method based on FFBP SAR imagings |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105842694A CN105842694A (en) | 2016-08-10 |
CN105842694B true CN105842694B (en) | 2018-10-09 |
Family
ID=56583381
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101430380A (en) * | 2008-12-19 | 2009-05-13 | 北京航空航天大学 | Large slanting view angle machine-carried SAR beam bunching mode imaging method based on non-uniform sampling |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008021374A2 (en) * | 2006-08-15 | 2008-02-21 | General Dynamics Advanced Information Systems, Inc | Methods for two-dimensional autofocus in high resolution radar systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||