WO2014011287A2 - Wide beam sar focusing method using navigation solution derived from autofocus data - Google Patents


Info

Publication number
WO2014011287A2
WO2014011287A2 PCT/US2013/036466 (US2013036466W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
navigation
profile
autofocus
image block
Prior art date
Application number
PCT/US2013/036466
Other languages
French (fr)
Other versions
WO2014011287A3 (en)
Inventor
Michael Yih-hwa JIN
Original Assignee
Raytheon Company
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Raytheon Company filed Critical Raytheon Company
Priority to EP13785966.6A priority Critical patent/EP2867616A2/en
Priority to JP2015520176A priority patent/JP6072240B2/en
Publication of WO2014011287A2 publication Critical patent/WO2014011287A2/en
Publication of WO2014011287A3 publication Critical patent/WO2014011287A3/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/89Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S13/90Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • G01S13/904SAR modes
    • G01S13/9004SAR image acquisition techniques
    • G01S13/9019Auto-focussing of the SAR signals

Definitions

  • One or more aspects of embodiments according to the present invention relate to improving the quality of synthetic aperture radar images and more particularly to methods for generating navigation profiles resulting in improved image quality.
  • Ground penetration radar typically operates at low frequency because higher frequencies have a significantly diminished ability to penetrate the ground.
  • The low frequency, i.e., long wavelength, results in a wide antenna pattern.
  • Embodiments of the present invention provide methods for processing SAR and navigation data to produce images of improved quality.
  • the present invention provides methods for generating navigation profiles resulting in improved SAR images.
  • Embodiments of the present invention also provide the particular benefit of generating navigation profiles resulting in SAR images with reduced discontinuities at the boundaries between image blocks in the images.
  • a method for forming a synthetic aperture radar (SAR) image, from SAR data and a navigation profile comprising: generating a first SAR image from the SAR data and the navigation profile; dividing the SAR image into image blocks; selecting a subset of the image blocks; applying an autofocus algorithm to each image block to form a phase error profile estimate for the image block; generating a navigation error profile estimate from the phase error profile estimates; and generating a second SAR image from the SAR data, the navigation profile, and the navigation error profile estimate.
  • the generating a navigation error profile estimate comprises: approximating the navigation error profile estimate as a vector of three low-order polynomials in time, and searching the space of polynomial coefficient values for a set of values which optimizes a measure of performance for the navigation error profile estimate.
  • the measure of performance is the weighted summed norm of the phase error, taken over a set of selected image blocks.
  • an image block is selected on the basis of having image contrast exceeding a predetermined threshold.
  • the weight, for a selected image block, used in calculating the weighted summed norm of the phase error equals (i) the reciprocal of the square root of the image contrast before autofocus for that image block, multiplied by (ii) the ratio of (a) the image contrast before autofocus for that image block to (b) the image contrast after autofocus for that image block.
  • the generating a navigation error profile estimate comprises: selecting a set of image blocks; and finding, for each point in time, the point in space which minimizes the weighted sum of the squares of the differences between the phase errors for the selected image blocks calculated from the navigation error profile, and the phase errors for the selected image blocks resulting from the autofocus algorithm.
  • the selected image blocks are selected on the basis of having image contrast exceeding a predetermined threshold.
  • the weight, for a selected image block, used in calculating the weighted sum is a function of the image contrast before autofocus for that image block and the image contrast after autofocus for that image block.
  • the weight, for a selected image block, used in calculating the weighted sum equals (i) the reciprocal of the square root of the image contrast before autofocus for that image block, multiplied by (ii) the ratio of (a) the image contrast before autofocus for that image block to (b) the image contrast after autofocus for that image block.
  • the generating a navigation error profile estimate comprises: writing an intermediate navigation error profile estimate as a vector of three low-order polynomials in time; searching the space of polynomial coefficient values for a set of values which optimizes a measure of performance for the intermediate navigation error profile estimate; forming an intermediate navigation profile and an intermediate image block phase error profile using the set of values of the polynomial coefficients; selecting a set of image blocks; and using the intermediate navigation profile and the intermediate image block phase error profile to find, for each point in time, the point in space which minimizes the weighted sum of the squares of the differences between the phase errors for the selected image blocks calculated from the point in space, and the phase errors for the selected image blocks resulting from the autofocus algorithm.
  • a method for forming an improved navigation profile from a coarse navigation profile and from a synthetic aperture radar (SAR) image comprising: generating a first SAR image from the SAR data and the navigation profile; dividing the SAR image into image blocks; selecting a subset of the image blocks; applying an autofocus algorithm to each image block to form a phase error profile estimate for the image block; and generating an improved navigation profile from the coarse navigation profile and the phase error profile estimates.
  • the generating an improved navigation profile comprises: approximating a navigation error profile estimate as a vector of three low-order polynomials in time; searching the space of polynomial coefficient values for a set of values which optimizes a measure of performance for the navigation error profile estimate; and correcting the coarse navigation profile using the navigation error profile estimate.
  • the measure of performance is the weighted summed norm of the phase error, taken over a set of selected image blocks.
  • an image block is selected on the basis of having image contrast exceeding a predetermined threshold.
  • the weight, for a selected image block, used in calculating the weighted summed norm of the phase error is a function of the image contrast before autofocus for that image block and the image contrast after autofocus for that image block.
  • the weight, for a selected image block, used in calculating the weighted summed norm of the phase error equals (i) the reciprocal of the square root of the image contrast before autofocus for that image block, multiplied by (ii) the ratio of (a) the image contrast before autofocus for that image block to (b) the image contrast after autofocus for that image block.
  • the generating an improved navigation profile comprises: selecting a set of image blocks; and finding, for each point in time, the point in space which minimizes the weighted sum of the squares of the differences between the phase errors for the selected image blocks calculated from the point in space, and the phase errors for the selected image blocks resulting from the autofocus algorithm.
  • an image block is selected on the basis of having image contrast exceeding a predetermined threshold.
  • the weight for an image block in the weighted sum equals (i) the reciprocal of the square root of the image contrast before autofocus for that image block, multiplied by (ii) the ratio of (a) the image contrast before autofocus for that image block to (b) the image contrast after autofocus for that image block.
  • the generating an improved navigation profile from the coarse navigation profile and the phase error profile estimates comprises: approximating an intermediate navigation error profile estimate as a vector of three low-order polynomials in time; searching the space of polynomial coefficient values for a set of values which optimizes a measure of performance for the intermediate navigation error profile estimate; forming an intermediate navigation profile and an intermediate image block phase error profile using the set of values of the polynomial coefficients; selecting a set of image blocks; and using the intermediate navigation profile and the intermediate image block phase error profile to find, for each point in time, the point in space which minimizes the weighted sum of the squares of the differences between the phase errors for the selected image blocks calculated from the point in space, and the phase errors for the selected image blocks resulting from the autofocus algorithm.
  • FIG. 1 is an illustration of an aircraft's actual and estimated path in relation to an area being imaged using SAR imaging;
  • FIG. 2 is a flowchart of a method for producing an improved navigation profile according to an embodiment of the present invention
  • FIG. 3 is a diagram illustrating the geometry of an aircraft's true and estimated position at one point in time according to an embodiment of the present invention.
  • FIG. 4 is a flowchart and data flow diagram, showing an improved SAR signal processing flow according to an embodiment of the present invention.
  • an aircraft flying along a path known as the actual aircraft path 10 may illuminate a first target 12 and a second target 14 in an image frame 16.
  • a sensing system aboard the aircraft such as a global positioning system combined with inertial sensors (GPS-INS), may form an estimate of the actual aircraft path 10; this estimate may be known as the navigation profile, or the knowledge aircraft path 20.
  • the target phase profile for each target is determined by the range between that target and the actual aircraft path 10.
  • the corresponding phase error as a function of time for any target is known as the phase error profile for that target.
  • This error may yield a defocused image.
  • the significant angle between targets, as seen from the aircraft position, results in their phase error profiles being different. Therefore, if two targets such as the first target 12 and the second target 14 are widely separated in angle, it may not be possible to refocus them using the same phase error profile.
  • One prior art solution to this angle dependent focusing problem involves dividing the image frame into multiple smaller image blocks, targets within which are sufficiently close together that they subtend relatively small angles at the aircraft.
  • sixteen such image blocks may be used.
  • Independent autofocus solutions are derived from these image blocks. This approach has been found useful for images with modest to medium defocus. For images with a higher degree of defocus, the image resulting from performing the autofocus process on each image block often reveals image discontinuities at the boundaries between image blocks. Such discontinuities in an image may affect the performance of subsequent image exploitation processes such as automated target recognition (ATR) and change detection (CD).
  • the above-mentioned image discontinuity may be caused by the combined effect of two factors: the image block size and the target quality within the image block.
  • a larger image block often leads to a more accurate phase error estimate because there is a higher probability that a larger image block contains point-like targets with enough strength for an autofocus algorithm to produce an accurate phase error profile.
  • a larger image block is associated with a coarse quantization of the phase error that often leads to image discontinuity.
  • a small image block is associated with a fine quantization of the phase error, but it often leads to poor phase error estimates due to a lower probability of containing quality targets.
  • An autofocus algorithm, such as that disclosed in the '548 Application, may be used to extract the phase error profile of a defocused point-like target in a SAR image from the complex data corresponding to the target.
  • This phase error profile consists of high order polynomial terms from the second order and above. Theoretically, the zeroth order phase error is related to the target range location error. The first order term is related to target azimuth location error.
  • An autofocus process may be used only to sharpen the target impulse response; it does not change the locations of the image pixels.
  • an improved navigation profile is formed using phase error profiles obtained from an autofocus process.
  • the SAR data is then reprocessed with this improved navigation profile to generate a well focused image free of image discontinuities.
  • the navigation profile is improved in the sense that it results in superior SAR images, and not necessarily in any other sense. Because the quality of the SAR image depends only on phase errors of certain orders, and is insensitive to others, the maximum discrepancy between the actual and estimated aircraft paths need not be reduced with the improvement; indeed, it may be increased.
  • the z-axis is then chosen to be vertical, and the y-axis is chosen to be perpendicular to both the x-axis and the z-axis, i.e., the y-axis is in the cross-track direction at the reference point.
  • Let the navigation error profile be ΔR_AC(t).
  • the phase error profile φ_ij(t) for a target located at the center of the (i,j)th image block is related to the navigation error profile ΔR_AC(t) by φ_ij(t) = (4π/λ)ΔR_ij(t), where ΔR_ij(t) is the component of ΔR_AC(t) along the line of sight to the block center and λ is the wavelength of the radar radiation. Only the second and higher order terms of φ_ij(t) affect image focus. We denote by φ'_ij(t) this high-order part of the phase error profile φ_ij(t), i.e., the phase error profile with its zeroth and first order terms removed.
  • the summed norm of the phase error may be minimized by approximating the navigation error profile as a vector of low-order polynomials, and searching the space of polynomial coefficient values. These values may also be known as levels.
  • In one embodiment, the polynomial approximation for the navigation error may be Δx(t) = a1·t + a2·t² + a3·t³, Δy(t) = a4·t² + a5·t³, and Δz(t) = a6·t² + a7·t³.
  • the search may be conducted over a range of first, second, and third order errors in Δx, second and third order errors in Δy, and second and third order errors in Δz.
  • the search for the optimal set of coefficients a1, a2, …, a7 may be done for one coefficient at a time, identified in FIG. 2 by the index j.
  • a range of values may be tested, indexed by i; this range may be the pulling range for that coefficient, i.e., the range of values the coefficient may reasonably take, given the typical characteristics of the navigation error profile.
  • the summed norm of the phase error may be compared to the lowest previously obtained, and if it represents an improvement, the new lowest value may be stored along with the index i, or, equivalently, along with the corresponding value of the polynomial coefficient being tested.
  • an exemplary method may begin by setting, in step 50, initial values of the search indices i and j.
  • i may index different levels or values of the coefficient being optimized in an inner loop
  • j may select which coefficient is optimized in the inner loop.
  • a range error profile may be generated for each image block.
  • the set of range error profiles may be converted to a set of phase error profiles. Steps 52 and 54 may be omitted for any image block for which the weight W_ij is zero.
  • the summed norm of the phase error may be calculated, and if, as determined by step 58, it is better, i.e., lower, than the best obtained so far, it may, in conditionally executed step 60, be saved, along with the index i which identifies the value resulting in the current best summed norm of the phase error.
  • the coefficient value may be saved, instead of its index i.
  • steps 62 and 64 the index i is advanced and execution returns to the start of the loop, unless all values of i have been tested, in which case the outer loop index is advanced in step 68. Outer loop completion is tested in step 70; the outer loop repeats until all desired values of its index, j, have been tested.
  • an improved navigation profile may be formed as R_AC(t) − ΔR_AC(t), i.e., by correcting the original coarse navigation profile with the estimated navigation error profile.
  • improved image block phase error profiles may be formed by subtracting, from each autofocus phase error profile, the phase error profile implied by the estimated navigation error profile.
  • phase error profiles obtained from autofocus are in general distorted by noise in the SAR image pixels.
  • the phase error has the least distortion.
  • image blocks without discrete targets lead to very poor phase error profiles.
  • the quality of the phase error profiles generated by the autofocus algorithm is determined by the discrete targets in each image block. It is proportional to the number, sharpness, and brightness of the discrete targets.
  • a convenient measure of the quality of the phase error profile is the contrast of the processed image block.
  • the weighting coefficients W_ij to be used in calculating the summed norm of the phase error, and in extracting the navigation error profile, are obtained in two steps.
  • the number of image blocks exceeds the number of degrees of freedom in the navigation error, and a fraction of the image blocks with lower image contrast may be ignored, i.e., assigned weights of zero.
  • the number of image blocks assigned nonzero weights must be greater than the number of degrees of freedom in the navigation error and, to avoid performing more computations than necessary, may preferably be less than 30% of the total number of image blocks.
  • W_ij is again the weight for the phase error of the (i,j)th image block,
  • γ0_ij is the image contrast before autofocus, and
  • γ_ij is the image contrast after autofocus. This weight is thus based on the magnitude of the normalized contrast improvement, where contrast is defined in the customary manner in terms of the magnitude A_i of the i-th image pixel.
  • the navigation error profile and, as a result, an improved navigation profile, may be obtained one point at a time using a least squares fit.
  • the range error profile ΔR_ij(t) for image block (i,j) may be recovered from the phase error profile as ΔR_ij(t) = (λ/4π)φ_ij(t).
  • the true distance from the actual aircraft position to the center 100 of image block (i,j) differs from the knowledge distance by ΔR_ij(t).
  • the true distance is given by the knowledge distance plus this range error.
  • if the phase error is known at any time, then one may conclude that at that time the aircraft position lies on a sphere centered at P_IM(i,j), the center of image block (i,j), with a radius equal to the true distance.
  • let û_ij be the unit vector 104 from the center 100 of image block (i,j) to the knowledge aircraft position 102 at time t.
  • the phase error profiles generated by the autofocus algorithm will not be perfectly free of errors; in this case a solution which most nearly satisfies, in a least-squares sense, all of the equations in this overdetermined system of equations can be obtained by a weighted least squares method. Let the distance from the aircraft position estimate to the plane, measured along its normal, be d_ij.
  • ΔR_AC(t) may then be found as the solution of a matrix equation which minimizes this error in a weighted least-squares sense.
  • the resulting improved navigation profile may then be used to generate an improved SAR image.
  • the efficiency may be enhanced by reducing the processing times involved in the first image formation and in the autofocus process.
  • the number of phase error profiles need not be large: between four and six phase error profiles should be sufficient. This means that we need only process four to six image blocks in the first image formation process. However, the whole image must be processed to determine which image blocks will provide high quality phase error profiles.
  • This may be accomplished efficiently by forming the first SAR image using a backprojection algorithm with a lower image pixel sampling rate. Reducing the sampling rate by a factor of 2 or 3 in each dimension may reduce the image formation time to 25% or 11%, respectively, compared to the time required at full sampling rate, but the image generated using a reduced sampling rate is adequate for an indication of local image contrast.
  • autofocus is applied to generate the phase error profiles. Using this approach, the overall computational efficiency of SAR signal processing, according to embodiments of the present invention, can be increased significantly.
  • an initial backprojection may be performed with a reduced sample rate, to generate a SAR image. Because of the reduced sample rate, this backprojection step 150 may incur considerably less computational load and delay than a backprojection at full resolution.
  • the contrast is computed over all of the image blocks in the image, and then, in step 154, image blocks with adequate contrast to be used in the following steps are identified and selected.
  • a backprojection is then performed, in step 156, with full sample rate, for these selected image blocks.
  • the backprojection again incurs a reduced computational burden, in this case as a result of being performed over only a subset of the image blocks.
  • an autofocus algorithm is used to generate a phase error profile for each image block.
  • step 160 is optional.
  • In step 160, if it is performed, a polynomial approximation for the navigation error profile is used to find an improved navigation solution.
  • this step also is optional.
  • In step 162, if it is performed, a least squares method is used to find the navigation path which is the best fit, in a weighted least-squares sense, to the phase and range errors corresponding to each of the image blocks selected in step 154. If both steps 160 and 162 are performed, then the intermediate navigation profile and the intermediate image block phase error profiles resulting from step 160 are fed into step 162. Once an improved navigation solution has been found by one or both of steps 160 and 162, backprojection with full sample rate over the entire image is used, in step 164, to obtain the final SAR image.
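The contrast-based block selection used throughout the processing flow (steps 152 and 154 of FIG. 4) can be sketched in Python. The contrast definition used here, the standard deviation of pixel intensity divided by its mean, is one customary choice assumed for illustration; the document's own formula is not reproduced in this text.

```python
import numpy as np

def image_contrast(block):
    """One customary contrast measure for a (possibly complex) SAR image block:
    the standard deviation of the pixel intensities divided by their mean.
    This definition is an assumption for the sketch."""
    inten = np.abs(np.asarray(block)) ** 2
    return float(inten.std() / inten.mean())

def select_blocks(blocks, threshold):
    """Indices of the image blocks whose contrast exceeds the threshold,
    mirroring the selection in steps 152-154 of FIG. 4."""
    return [k for k, b in enumerate(blocks) if image_contrast(b) > threshold]
```

A featureless block has contrast near zero, while a block containing a strong point-like target has high contrast, so it survives the threshold.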

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Position Fixing By Use Of Radio Waves (AREA)
  • Automation & Control Theory (AREA)

Abstract

An algorithm for deriving improved navigation data from the autofocus results obtained from selected image blocks in a wide-beam synthetic aperture radar (SAR) image. In one embodiment the navigation error may be approximated with a vector of low-order polynomials, and a set of polynomial coefficients found which results in a good phase error match with the autofocus results. In another embodiment, a least squares solution may be found for the system of equations relating the phase errors at a point in time for selected image blocks to the navigation error vector at that point in time. An approach using low sample rate backprojection (150) initially to select suitable image blocks, and full sample rate backprojection (156) for the selected image blocks, followed by full sample rate backprojection (164) for the image, using the improved navigation solution, may be used to reduce the computational load of employing the algorithm.

Description

WIDE BEAM SAR FOCUSING METHOD USING NAVIGATION SOLUTION
DERIVED FROM AUTOFOCUS DATA
CROSS-REFERENCE TO RELATED APPLICATION(S)
[0001] This application is related to and incorporates by reference in its entirety, as if set forth in full, U.S. Patent Application Ser. No. 13/347,548, filed on January 10, 2012 ("the '548 Application").
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
[0002] This invention was made with U.S. Government support under contract No.
H94003-04-D-0006 awarded by the Department of Defense. The U.S. Government has certain rights in this invention.
BACKGROUND
1. Field
[0003] One or more aspects of embodiments according to the present invention relate to improving the quality of synthetic aperture radar images and more particularly to methods for generating navigation profiles resulting in improved image quality.
2. Description of Related Art
[0004] Ground penetration radar (GPR) typically operates at low frequency because higher frequencies have a significantly diminished ability to penetrate the ground. When operated from a platform, such as an aircraft, capable of carrying only a small antenna, the low frequency, i.e., long wavelength, results in a wide antenna pattern.
[0005] The use of synthetic aperture radar (SAR) makes it possible to generate radar images with high spatial resolution using a small antenna on a moving platform. SAR signal processing techniques accomplish this by combining the signals received by the moving antenna at various points along its path in such a way as to simulate the operation of a larger antenna, having dimensions comparable to those of the path of the small antenna.
[0006] To construct sharp SAR images it is necessary to have more accurate information about the path of the antenna than may be available, for example, from a path estimator based on a global positioning system receiver and on an inertial navigation system (a GPS/INS system). Prior art systems may generate improved images using an autofocus algorithm after first dividing the image into sub-images, or image blocks, and then finding values of the second order range corrections for which the focus is best. These systems may, if large image blocks are used, suffer from discontinuities at the boundaries between image blocks. If small image blocks are used, then some may contain low quality targets and the results of the autofocus algorithm may be poor.
[0007] Therefore, there is a need for a new solution to refocus SAR images.
SUMMARY
[0008] Embodiments of the present invention provide methods for processing SAR and navigation data to produce images of improved quality. In particular, the present invention provides methods for generating navigation profiles resulting in improved SAR images. Embodiments of the present invention also provide the particular benefit of generating navigation profiles resulting in SAR images with reduced discontinuities at the boundaries between image blocks in the images.
[0009] According to an embodiment of the present invention there is provided a method for forming a synthetic aperture radar (SAR) image, from SAR data and a navigation profile, the method comprising: generating a first SAR image from the SAR data and the navigation profile; dividing the SAR image into image blocks; selecting a subset of the image blocks; applying an autofocus algorithm to each image block to form a phase error profile estimate for the image block; generating a navigation error profile estimate from the phase error profile estimates; and generating a second SAR image from the SAR data, the navigation profile, and the navigation error profile estimate.
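The method of paragraph [0009] can be sketched as a short Python pipeline. The callables form_image, autofocus, and estimate_nav_error are hypothetical placeholders for the backprojection, autofocus, and navigation-error estimation stages described elsewhere in this document, and the contrast-based block selection is one plausible realization of "selecting a subset of the image blocks".

```python
import numpy as np

def refocus_sar(sar_data, nav_profile, form_image, autofocus, estimate_nav_error,
                grid=(4, 4), n_select=6):
    """Two-pass focusing sketch: form a first image, autofocus selected
    image blocks, estimate the navigation error profile, and re-form the image."""
    image = form_image(sar_data, nav_profile)

    # Divide the image into a grid of image blocks.
    rows = np.array_split(image, grid[0], axis=0)
    blocks = [b for row in rows for b in np.array_split(row, grid[1], axis=1)]

    # Select the blocks with the highest contrast (std/mean of pixel intensity,
    # an assumed contrast definition).
    def contrast(block):
        inten = np.abs(block) ** 2
        return inten.std() / inten.mean()

    order = sorted(range(len(blocks)), key=lambda k: contrast(blocks[k]), reverse=True)
    selected = order[:n_select]

    # One phase error profile estimate per selected block.
    phase_errors = {k: autofocus(blocks[k]) for k in selected}

    # Estimate the navigation error profile and re-form the image with the
    # corrected navigation profile.
    nav_error = estimate_nav_error(phase_errors, nav_profile)
    return form_image(sar_data, nav_profile - nav_error)
```

With identity stand-ins for the three stages, the pipeline simply reproduces the first image, which makes the data flow easy to check in isolation.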
[0010] In one embodiment, the generating a navigation error profile estimate comprises: approximating the navigation error profile estimate as a vector of three low-order
polynomials in time and searching the space of polynomial coefficient values for a set of values which optimizes a measure of performance for the navigation error profile estimate.
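The coefficient search of paragraph [0010] can be sketched as the one-coefficient-at-a-time scan shown in FIG. 2. The objective callable stands in for the weighted summed norm of the phase error, and the pulling ranges and number of levels are hypothetical.

```python
import numpy as np

def coordinate_search(objective, pulling_ranges, n_levels=21):
    """One-coefficient-at-a-time search: for each polynomial coefficient
    (outer index j), scan candidate levels (inner index i) within that
    coefficient's pulling range, keeping the value that gives the lowest
    objective, i.e., the weighted summed norm of the phase error."""
    coeffs = np.zeros(len(pulling_ranges))
    best = objective(coeffs)
    for j, (lo, hi) in enumerate(pulling_ranges):      # outer loop over j
        best_val = coeffs[j]
        for level in np.linspace(lo, hi, n_levels):    # inner loop over i
            coeffs[j] = level
            score = objective(coeffs)
            if score < best:        # better (lower) than the best so far?
                best, best_val = score, level  # save the value and its score
        coeffs[j] = best_val
    return coeffs, best
```

For a separable objective, such as a sum of squared deviations from a target coefficient vector, a single pass over the coefficients recovers the minimizer on the scanned grid.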
[0011] In one embodiment, the measure of performance is the weighted summed norm of the phase error, taken over a set of selected image blocks.
[0012] In one embodiment, an image block is selected on the basis of having image contrast exceeding a predetermined threshold.
[0013] In one embodiment, the weight, for a selected image block, used in calculating the weighted summed norm of the phase error, equals (i) the reciprocal of the square root of the image contrast before autofocus for that image block, multiplied by (ii) the ratio of (a) the image contrast before autofocus for that image block to (b) the image contrast after autofocus for that image block.
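The weight of paragraph [0013] is a simple function of the two contrast measurements; a minimal sketch, assuming both contrasts are positive:

```python
import math

def block_weight(contrast_before, contrast_after):
    """Weight for one image block's phase error profile: the reciprocal of the
    square root of the pre-autofocus contrast, multiplied by the ratio of the
    pre-autofocus contrast to the post-autofocus contrast ([0013])."""
    return (1.0 / math.sqrt(contrast_before)) * (contrast_before / contrast_after)
```

For example, a block whose contrast goes from 4.0 to 8.0 under autofocus receives weight (1/2)·(4/8) = 0.25.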
[0014] In one embodiment, the generating a navigation error profile estimate comprises: selecting a set of image blocks; and finding, for each point in time, the point in space which minimizes the weighted sum of the squares of the differences between the phase errors for the selected image blocks calculated from the navigation error profile, and the phase errors for the selected image blocks resulting from the autofocus algorithm.

[0015] In one embodiment, the selected image blocks are selected on the basis of having image contrast exceeding a predetermined threshold.
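After linearization, the per-instant fit described in paragraph [0014] reduces to a small weighted least-squares problem; a sketch under the assumption that each selected block k contributes one linear condition u_k · dr = d_k, where u_k is the unit vector from the block center toward the knowledge aircraft position and d_k is the range error implied by that block's autofocus phase error (the symbols are illustrative).

```python
import numpy as np

def solve_position_error(unit_vectors, range_errors, weights):
    """Weighted least-squares estimate of the 3-D navigation error at one
    instant: minimize sum_k w_k * (u_k . dr - d_k)^2 over dr."""
    U = np.asarray(unit_vectors, dtype=float)      # shape (n_blocks, 3)
    d = np.asarray(range_errors, dtype=float)      # shape (n_blocks,)
    w = np.sqrt(np.asarray(weights, dtype=float))  # sqrt-weights scale the rows
    dr, *_ = np.linalg.lstsq(U * w[:, None], d * w, rcond=None)
    return dr
```

With at least three independent look directions the system is well posed, and additional blocks make it overdetermined, which is the case the weighted least-squares formulation is meant to handle.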
[0016] In one embodiment, the weight, for a selected image block, used in calculating the weighted sum, is a function of the image contrast before autofocus for that image block and the image contrast after autofocus for that image block.
[0017] In one embodiment, the weight, for a selected image block, used in calculating the weighted sum, equals (i) the reciprocal of the square root of the image contrast before autofocus for that image block, multiplied by (ii) the ratio of (a) the image contrast before autofocus for that image block to (b) the image contrast after autofocus for that image block.
[0018] In one embodiment, the generating a navigation error profile estimate comprises: writing an intermediate navigation error profile estimate as a vector of three low-order polynomials in time; searching the space of polynomial coefficient values for a set of values which optimizes a measure of performance for the intermediate navigation error profile estimate; forming an intermediate navigation profile and an intermediate image block phase error profile using the set of values of the polynomial coefficients; selecting a set of image blocks; and using the intermediate navigation profile and the intermediate image block phase error profile to find, for each point in time, the point in space which minimizes the weighted sum of the squares of the differences between the phase errors for the selected image blocks calculated from the point in space, and the phase errors for the selected image blocks resulting from the autofocus algorithm.
[0019] According to an embodiment of the present invention there is provided a method for forming an improved navigation profile from a coarse navigation profile and from a synthetic aperture radar (SAR) image, the method comprising: generating a first SAR image from the SAR data and the navigation profile; dividing the SAR image into image blocks; selecting a subset of the image blocks; applying an autofocus algorithm to each image block to form a phase error profile estimate for the image block; and generating an improved navigation profile from the coarse navigation profile and the phase error profile estimates.
[0020] In one embodiment, the generating an improved navigation profile comprises: approximating a navigation error profile estimate as a vector of three low-order polynomials in time; searching the space of polynomial coefficient values for a set of values which optimizes a measure of performance for the navigation error profile estimate; and correcting the coarse navigation profile using the navigation error profile estimate.
[0021] In one embodiment, the measure of performance is the weighted summed norm of the phase error, taken over a set of selected image blocks.
[0022] In one embodiment, an image block is selected on the basis of having image contrast exceeding a predetermined threshold.
[0023] In one embodiment, the weight, for a selected image block, used in calculating the weighted summed norm of the phase error, is a function of the image contrast before autofocus for that image block and the image contrast after autofocus for that image block.
[0024] In one embodiment, the weight, for a selected image block, used in calculating the weighted summed norm of the phase error, equals (i) the reciprocal of the square root of the image contrast before autofocus for that image block, multiplied by (ii) the ratio of (a) the image contrast before autofocus for that image block to (b) the image contrast after autofocus for that image block.
[0025] In one embodiment, the generating an improved navigation profile comprises: selecting a set of image blocks; and finding, for each point in time, the point in space which minimizes the weighted sum of the squares of the differences between the phase errors for the selected image blocks calculated from the point in space, and the phase errors for the selected image blocks resulting from the autofocus algorithm.
[0026] In one embodiment, an image block is selected on the basis of having image contrast exceeding a predetermined threshold.
[0027] In one embodiment, the weight for an image block in the weighted sum, equals (i) the reciprocal of the square root of the image contrast before autofocus for that image block, multiplied by (ii) the ratio of (a) the image contrast before autofocus for that image block to (b) the image contrast after autofocus for that image block.
[0028] In one embodiment, the generating an improved navigation profile from the coarse navigation profile and the phase error profile estimates comprises: approximating an intermediate navigation error profile estimate as a vector of three low-order polynomials in time; searching the space of polynomial coefficient values for a set of values which optimizes a measure of performance for the intermediate navigation error profile estimate; forming an intermediate navigation profile and an intermediate image block phase error profile using the set of values of the polynomial coefficients; selecting a set of image blocks; and using the intermediate navigation profile and the intermediate image block phase error profile to find, for each point in time, the point in space which minimizes the weighted sum of the squares of the differences between the phase errors for the selected image blocks calculated from the point in space, and the phase errors for the selected image blocks resulting from the autofocus algorithm.
BRIEF DESCRIPTION OF THE DRAWINGS
[0029] Features, aspects, and embodiments are described in conjunction with the attached drawings, in which:
[0030] FIG. 1 is an illustration of an aircraft's actual and estimated path in relation to an area being imaged using SAR imaging;
[0031] FIG. 2 is a flowchart of a method for producing an improved navigation profile according to an embodiment of the present invention;
[0032] FIG. 3 is a diagram illustrating the geometry of an aircraft's true and estimated position at one point in time according to an embodiment of the present invention; and
[0033] FIG. 4 is a flowchart and data flow diagram, showing an improved SAR signal processing flow according to an embodiment of the present invention.
DETAILED DESCRIPTION
[0034] The detailed description set forth below in connection with the appended drawings is intended as a description of the presently preferred embodiments of a wide beam SAR focusing method using a navigation solution derived from autofocus data, provided in accordance with the present invention, and is not intended to represent the only forms in which the present invention may be constructed or utilized. The description sets forth the features of the present invention in connection with the illustrated embodiments. It is to be understood, however, that the same or equivalent functions and structures may be
accomplished by different embodiments that are also intended to be encompassed within the spirit and scope of the invention. As denoted elsewhere herein, like element numbers are intended to indicate like elements or features.
[0035] In a wide beam SAR system, image defocusing varies spatially across the image because target range error is angle dependent. Referring to FIG. 1 , an aircraft (not shown) flying along a path known as the actual aircraft path 10 may illuminate a first target 12 and a second target 14 in an image frame 16. A sensing system aboard the aircraft, such as a global positioning system combined with inertial sensors (GPS-INS), may form an estimate of the actual aircraft path 10; this estimate may be known as the navigation profile, or the knowledge aircraft path 20. The target phase profile for each target is determined by the range between that target and the actual aircraft path 10. Error in the knowledge aircraft path 20, i.e., any discrepancy between the knowledge aircraft path 20 and the actual aircraft path 10, leads to a range error 22 to the first target 12, and a range error 24 to the second target 14, at any point in time. The corresponding phase error as a function of time for any target is known as the phase error profile for that target. This error may yield a defocused image. In a wide beam SAR, the significant angle between targets, as seen from the aircraft position, results in their phase error profiles being different. Therefore, if two targets such as the first target 12 and the second target 14 are widely separated in angle, it may not be possible to refocus them using the same phase error profile.
[0036] One prior art solution to this angle dependent focusing problem involves dividing the image frame into multiple smaller image blocks, targets within which are sufficiently close together that they subtend relatively small angles at the aircraft. In one embodiment, sixteen such image blocks may be used. Independent autofocus solutions are derived from these image blocks. This approach has been found useful for images with modest to medium defocus. For images with a higher degree of defocus, the image resulting from performing the autofocus process on each image block often reveals image discontinuities at the boundaries between image blocks. Such discontinuities in an image may affect the performance of subsequent image exploitation processes such as automated target recognition (ATR) and change detection (CD).
[0037] The above-mentioned image discontinuity may be caused by the combined effect of two factors: the image block size and the target quality within the image block. A larger image block often leads to a more accurate phase error estimate because there is a higher probability that a larger image block contains point-like targets with enough strength for an autofocus algorithm to produce an accurate phase error profile. However, a larger image block is associated with a coarse quantization of the phase error that often leads to image discontinuity. On the other hand, a small image block is associated with a fine quantization of the phase error, but it often leads to poor phase error estimates due to a lower probability of containing quality targets.
[0038] An autofocus algorithm, such as that disclosed in the '548 Application, may be used to extract the phase error profile of a defocused point-like target in a SAR image from the complex data corresponding to the target. This phase error profile consists of high order polynomial terms from the second order and above. Theoretically, the zeroth order phase error is related to the target range location error. The first order term is related to target azimuth location error. In a processed SAR image, there is no way to identify the location error without the use of the ground truth. An autofocus process may be used only to sharpen the target impulse response; it does not change the locations of the image pixels.
[0039] In one embodiment of the present invention, an improved navigation profile, or, equivalently, an estimate of the navigation error profile, is formed using phase error profiles obtained from an autofocus process. The SAR data is then reprocessed with this improved navigation profile to generate a well focused image free of image discontinuities. It is useful to note that the navigation profile is improved in the sense that it results in superior SAR images, and not necessarily in any other sense. Because the quality of the SAR image depends only on phase errors of certain orders, and is insensitive to others, the maximum discrepancy between the actual and estimated aircraft paths need not be reduced with the improvement; indeed, it may be increased.
[0040] Let the knowledge aircraft path be R_AC(t) = [x_AC(t) y_AC(t) z_AC(t)]^T, and the image block center positions be P_IM(i,j) = [x_IM(i,j) y_IM(i,j) z_IM(i,j)]^T. It is customary, although not necessary, to define the coordinate system with the x-axis tangential to the aircraft's velocity at some reference point along its path, such as the center of the track along which SAR data are taken, or at the beginning of that track. The z-axis is then chosen to be vertical, and the y-axis is chosen to be perpendicular to both the x-axis and the z-axis, i.e., the y-axis is in the cross-track direction at the reference point.
[0041] Let the navigation error profile be ΔR_AC(t). The phase error profile εθ_{i,j}(t) for a target located at the center of the (i,j)th image block is related to the navigation error profile ΔR_AC(t) by

[0042] εθ_{i,j}(t) = (4π/λ) [ ||R_AC(t) − P_IM(i,j)|| − ||(R_AC(t) − ΔR_AC(t)) − P_IM(i,j)|| ],

[0043] where λ is the wavelength of the radar radiation. Only the second and higher order terms of εθ_{i,j}(t) affect image focus. We denote by εθ′_{i,j}(t) this high order part of the phase error profile εθ_{i,j}(t), i.e., the phase error profile with its zeroth and first order terms removed.
[0044] We seek a navigation error profile resulting in phase error profiles for the image blocks which are in good agreement with the phase error profiles, εψ_{i,j}(t), obtained from the autofocus process. The extent to which such agreement exists may be quantified by the summed norm of the phase error, defined as

Σ_{i=1}^{I} Σ_{j=1}^{J} W_{i,j} || εψ_{i,j}(t) − εθ′_{i,j}(t) ||,

where the W_{i,j} are suitably chosen weights, and I and J are the dimensions of the image in units of image blocks. Generally, the lower the summed norm of the phase error, the better the quality of the SAR image generated using the corresponding navigation error profile will be.
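To make the relations above concrete, the phase error induced by a navigation error, its high-order part, and the weighted summed norm can be sketched in a few lines of numpy. This is an illustrative sketch, not the patented implementation; the array-based representation of the profiles and all function names are assumptions:

```python
import numpy as np

def high_order_part(phase, t):
    """Strip the zeroth- and first-order (constant and linear) terms,
    leaving only the part of the phase error that affects focus."""
    linear = np.polyval(np.polyfit(t, phase, 1), t)
    return phase - linear

def phase_error_profile(r_ac, p_im, dr_ac, wavelength):
    """Phase error for a target at image-block center p_im, from the
    knowledge path r_ac (Nx3) and navigation error dr_ac (Nx3):
    (4*pi/lambda) * (knowledge range - true range)."""
    knowledge_range = np.linalg.norm(r_ac - p_im, axis=1)
    true_range = np.linalg.norm((r_ac - dr_ac) - p_im, axis=1)
    return (4.0 * np.pi / wavelength) * (knowledge_range - true_range)

def summed_norm(weights, psi_profiles, theta_profiles, t):
    """Weighted summed norm of the residual between autofocus-derived
    and navigation-derived high-order phase error profiles."""
    total = 0.0
    for w, psi, theta in zip(weights, psi_profiles, theta_profiles):
        residual = high_order_part(psi, t) - high_order_part(theta, t)
        total += w * np.linalg.norm(residual)
    return total
```

As a sanity check, a zero navigation error yields a zero phase error profile, and identical autofocus and model profiles drive the summed norm to zero.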
[0045] The summed norm of the phase error may be minimized by approximating the navigation error profile as a vector of low-order polynomials, and searching the space of polynomial coefficient values. These values may also be known as levels. In one embodiment, the polynomial approximation for the navigation error may be

ΔR_AC(t) = [ a_1 s + a_2 s^2 + a_3 s^3,  a_4 s^2 + a_5 s^3,  a_6 s^2 + a_7 s^3 ]^T, where s = t/T, and T is the synthetic aperture time,

i.e., the search may be conducted over a range of first, second, and third order errors in Δx, second and third order errors in Δy, and second and third order errors in Δz.
[0046] Referring to FIG. 2, the search for the optimal set of coefficients a_1, a_2, ..., a_7 may be done one coefficient at a time, identified in FIG. 2 by the index j. For each coefficient, a range of values may be tested, indexed by i; this range may be the pulling range for that coefficient, i.e., the range of values the coefficient may reasonably take, given the typical characteristics of the navigation error profile. For each trial point in the space of coefficient values, the summed norm of the phase error may be compared to the lowest previously obtained, and if it represents an improvement, the new lowest value may be stored along with the index i, or, equivalently, along with the corresponding value of the polynomial coefficient being tested.
[0047] In particular, an exemplary method may begin by setting, in step 50, initial values of the search indices i and j. Here, i may index different levels or values of the coefficient being optimized in an inner loop, and j may select which coefficient is optimized in the inner loop. For example, j=1 may correspond to optimizing a_1, j=2 may correspond to optimizing a_2, and so on. In a subsequent step 52, a range error profile may be generated for each image block. In step 54, the set of range error profiles may be converted to a set of phase error profiles. Steps 52 and 54 may be omitted for any image block for which the weight W_{i,j} is zero. In step 56, the summed norm of the phase error may be calculated, and if, as determined by step 58, it is better, i.e., lower, than the best obtained so far, it may, in conditionally executed step 60, be saved, along with the index i which identifies the value resulting in the current best summed norm of the phase error. In an alternate embodiment, the coefficient value may be saved, instead of its index i. In steps 62 and 64, the index i is advanced and execution returns to the start of the loop, unless all values of i have been tested, in which case the outer loop index is advanced in step 68. Outer loop completion is tested in step 70; the outer loop repeats until all desired values of its index, j, have been tested.
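The one-coefficient-at-a-time search of FIG. 2 can be sketched as a coordinate search over the seven coefficients, where the cost function evaluates the summed norm of the phase error for a trial coefficient vector. The cost function, the number of levels per pulling range, and the function names here are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

def coordinate_search(cost, pulling_ranges, n_levels=11):
    """One-coefficient-at-a-time search: outer loop (index j) selects the
    coefficient, inner loop (index i) tests levels over its pulling range,
    and the level giving the lowest cost so far is kept."""
    coeffs = np.zeros(len(pulling_ranges))
    for j, (lo, hi) in enumerate(pulling_ranges):      # outer loop over j
        best_cost, best_level = np.inf, coeffs[j]
        for level in np.linspace(lo, hi, n_levels):    # inner loop over i
            trial = coeffs.copy()
            trial[j] = level
            c = cost(trial)
            if c < best_cost:                          # step 58/60: save improvement
                best_cost, best_level = c, level
        coeffs[j] = best_level
    return coeffs

def nav_error_poly(coeffs, t, T):
    """Polynomial navigation-error model: orders 1-3 in the along-track
    error dx, orders 2-3 in dy and dz, in normalized time s = t/T."""
    a1, a2, a3, a4, a5, a6, a7 = coeffs
    s = t / T
    dx = a1 * s + a2 * s**2 + a3 * s**3
    dy = a4 * s**2 + a5 * s**3
    dz = a6 * s**2 + a7 * s**3
    return np.stack([dx, dy, dz], axis=1)
```

Because each coefficient is optimized while the others are held fixed, the search cost grows linearly with the number of coefficients and levels, rather than exponentially with a full grid search.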
[0048] Once optimal values, i.e., values which optimize the summed norm of the phase error, have been found for all seven coefficients, over the given pulling ranges, an improved navigation profile may be formed as R_AC(t) − ΔR_AC(t), i.e., by correcting the original coarse navigation profile with the estimated navigation profile error. Similarly, improved image block phase error profiles may be formed as εψ_{i,j}(t) − εθ′_{i,j}(t), for i = 1, ..., I and j = 1, ..., J, respectively.
[0049] The phase error profiles obtained from autofocus are in general distorted by noise in the SAR image pixels. For image blocks with a large number of small bright discrete targets, the phase error has the least distortion. However, image blocks without discrete targets lead to very poor phase error profiles. The quality of the phase error profiles generated by the autofocus algorithm is determined by the discrete targets in each image block. It is proportional to the number, sharpness, and brightness of the discrete targets. A convenient measure of the quality of the phase error profile is the contrast of the processed image block.
[0050] In one embodiment, the weighting coefficients W_{i,j} to be used in calculating the summed norm of the phase error, and in extracting the navigation error profile, are obtained in two steps. In an embodiment with, for example, 16 image blocks, and in which a total of 7 polynomial coefficients are being determined, the number of image blocks exceeds the number of degrees of freedom in the navigation error, and a fraction of the image blocks with lower image contrast may be ignored, i.e., assigned weights of zero. The number of image blocks assigned nonzero weights must be greater than the number of degrees of freedom in the navigation error and, to avoid performing more computations than necessary, may preferably be less than 30% of the total number of image blocks. Once the image blocks to be ignored have been identified, the contrast for each of the remaining image blocks may be converted into weights using the formula

W_{i,j} = (1/√γ0_{i,j}) · (γ0_{i,j}/γ_{i,j}),

[0051] where W_{i,j} is again the weight for the phase error of the (i,j)th image block, γ0_{i,j} is the image contrast before autofocus, and γ_{i,j} is the image contrast after autofocus. This weight is thus based on the magnitude of the normalized contrast improvement, where contrast is defined in the customary manner, as

γ = sqrt( (1/N) Σ_{i=1}^{N} (A_i^2 − μ)^2 ) / μ,  where μ = (1/N) Σ_{i=1}^{N} A_i^2,

where A_i is the magnitude of the ith image pixel and N is the number of pixels in the image block.
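These two quantities can be sketched as follows, assuming the weight form recited in the claims (the reciprocal square root of the pre-autofocus contrast, times the ratio of pre- to post-autofocus contrast) and the customary standard-deviation-over-mean definition of intensity contrast; the function names are illustrative:

```python
import numpy as np

def image_contrast(block):
    """Customary contrast measure: standard deviation of the pixel
    intensity (squared magnitude) divided by its mean."""
    intensity = np.abs(block) ** 2
    return intensity.std() / intensity.mean()

def block_weight(contrast_before, contrast_after):
    """Weight per the claimed form: (1/sqrt(gamma0)) * (gamma0/gamma),
    where gamma0 and gamma are the contrasts before and after autofocus."""
    return (1.0 / np.sqrt(contrast_before)) * (contrast_before / contrast_after)
```

A featureless (constant-magnitude) block has zero contrast and would fall below any selection threshold, while a block containing bright point-like targets scores high and is retained with a nonzero weight.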
[0052] In another embodiment, the navigation error profile, and, as a result, an improved navigation profile, may be obtained one point at a time using a least squares fit. Referring to FIG. 3, the phase error profile εψ_{i,j}(t) of image block (i,j) is proportional to the error ΔR_{i,j}(t) in the slant range between P_IM(i,j), the center 100 of image block (i,j), and the knowledge aircraft position 102, i.e., ΔR_{i,j}(t) = −(λ/4π) εψ_{i,j}(t). This implies that the true distance from the actual aircraft position to the center 100 of image block (i,j) differs from the knowledge distance by ΔR_{i,j}(t). Hence, the true distance is given by ||R_AC(t) − P_IM(i,j)|| + ΔR_{i,j}(t), where R_AC(t) is again the knowledge aircraft position.
[0053] If the phase error is known at any time, then one may conclude that at that time the aircraft position lies on a sphere centered at P_IM(i,j) with a radius ||R_AC(t) − P_IM(i,j)|| + ΔR_{i,j}(t). Since the navigation error is usually quite small, one can approximate the small portion of the sphere with a tangent plane, and assume that the aircraft position lies on a plane perpendicular to the vector R_AC(t) − P_IM(i,j).
[0054] Let ρ_{i,j}(t) be the unit vector 104 from the center 100 of image block (i,j) to the knowledge aircraft position 102 at time t. Let RI_{i,j}(t) be the intersection 106 of the above-mentioned plane and the line passing through R_AC(t) and P_IM(i,j), and let the aircraft position estimate 108 be R̂_AC(t). Since only a subset of the phase error profiles are needed to form an improved navigation profile, we may use, instead of the image block index (i,j), the single index k, where k = 1, 2, ..., N. If the phase error profiles generated by the autofocus algorithm were error-free, then the aircraft position estimate would satisfy the following overdetermined system of equations:

(R̂_AC(t) − RI_1(t)) · ρ_1(t) = 0
(R̂_AC(t) − RI_2(t)) · ρ_2(t) = 0
...
(R̂_AC(t) − RI_N(t)) · ρ_N(t) = 0,

where · is the inner product operator.
[0055] The phase error profiles generated by the autofocus algorithm will not be perfectly free of errors; in this case a solution which most nearly satisfies, in a least-squares sense, all of the equations in this overdetermined system of equations can be obtained by a weighted least squares method. Let the distance from the aircraft position estimate to the kth plane be d_k. It can be shown that

d_k = || (R̂_AC(t) − RI_k(t)) · ρ_k(t) ||.

The weighted sum of these distances d_1 through d_N is then given by

d = Σ_{k=1}^{N} W_k || (R̂_AC(t) − RI_k(t)) · ρ_k(t) ||.

[0056] Minimizing d is equivalent to minimizing

[0057] Σ_{k=1}^{N} W_k ( (R̂_AC(t) − RI_k(t)) · ρ_k(t) )².
[0058] R̂_AC(t) may then be found as the solution of the following matrix equation, which minimizes this error in a weighted least-squares sense:

[ Σ_{k=1}^{N} W_k ρ_k(t) ρ_k(t)^T ] R̂_AC(t) = Σ_{k=1}^{N} W_k ( ρ_k(t) · RI_k(t) ) ρ_k(t).

The improved navigation profile R̂_AC(t) may then be used to generate an improved SAR image.
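At a single time instant this reduces to a 3x3 normal-equation system. The sketch below assumes the slant-range corrections dR_k have already been derived from the autofocus phase errors; the function name and interface are illustrative, not part of the claimed method:

```python
import numpy as np

def position_estimate(r_ac, p_list, dr_list, weights):
    """Weighted least-squares aircraft position at one time instant.

    r_ac    : knowledge aircraft position, shape (3,)
    p_list  : centers of the N selected image blocks, shape (N, 3)
    dr_list : slant-range corrections dR_k from autofocus phase errors, (N,)
    weights : per-block weights W_k, shape (N,)

    Each block constrains the position to a plane through
    RI_k = P_k + (||r_ac - P_k|| + dR_k) * rho_k with normal rho_k, the
    unit vector from the block center toward the knowledge position.
    The normal equations (sum W_k rho_k rho_k^T) x = sum W_k (rho_k . RI_k) rho_k
    are then solved for the position x.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, dr, w in zip(p_list, dr_list, weights):
        v = r_ac - p
        rho = v / np.linalg.norm(v)                # unit vector, block -> aircraft
        ri = p + (np.linalg.norm(v) + dr) * rho    # point on the tangent plane
        A += w * np.outer(rho, rho)
        b += w * np.dot(rho, ri) * rho
    return np.linalg.solve(A, b)
```

At least three blocks with non-coplanar look directions are needed for the 3x3 system to be invertible, which is one reason the method selects several well-separated high-contrast blocks.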
[0059] Directly incorporating the navigation improvement process into the SAR signal processing flow would be computationally inefficient because SAR image formation would have to be performed twice: once to generate autofocus phase error profiles for the image blocks, and once to generate an improved, well focused image using the improved navigation profile. This inefficiency may be significantly reduced by redesigning the overall SAR signal processing flow.
[0060] The efficiency may be enhanced by reducing the processing times involved in the first image formation and in the autofocus process. In the navigation profile improvement process, the number of phase error profiles need not be large: between four and six phase error profiles should be sufficient. This means that we need only process four to six image blocks in the first image formation process. However, the whole image must be processed to determine which image blocks will provide high quality phase error profiles. This may be accomplished efficiently by forming the first SAR image using a backprojection algorithm with a lower image pixel sampling rate. Reducing the sampling rate by a factor of 2 or 3 in each dimension may reduce the image formation time to 25% or 11%, respectively, compared to the time required at full sampling rate, but the image generated using a reduced sampling rate is adequate for an indication of local image contrast. Once the few image blocks with good contrast are identified based on the calculated contrast measures, autofocus is applied to generate the phase error profiles. Using this approach, the overall computational efficiency of SAR signal processing, according to embodiments of the present invention, can be increased significantly.
[0061] In particular, referring to FIG. 4, in a first step 150 of an exemplary embodiment, an initial backprojection may be performed with a reduced sample rate, to generate a SAR image. Because of the reduced sample rate, this backprojection step 150 may incur considerably less computational load and delay than a backprojection at full resolution. In a subsequent step 152, the contrast is computed over all of the image blocks in the image, and then, in step 154, image blocks with adequate contrast to be used in the following steps are identified and selected. A backprojection is then performed, in step 156, with full sample rate, for these selected image blocks. In step 156, the backprojection again incurs a reduced computational burden, in this case as a result of being performed over only a subset of the image blocks. Next, in step 158, an autofocus algorithm is used to generate a phase error profile for each image block.
[0062] As indicated by the dashed line bypassing step 160, step 160 is optional. In step 160, if it is performed, a polynomial approximation for the navigation error profile is used to find an improved navigation solution. As indicated by the dashed line bypassing step 162, this step also is optional. In step 162, if it is performed, a least squares method is used to find the navigation path which is the best fit, in a weighted least square sense, to the phase and range errors corresponding to each of the image blocks selected in step 154. If both steps 160 and 162 are performed, then the intermediate navigation profile and the intermediate image block phase error profiles resulting from step 160 are fed into step 162. Once an improved navigation solution has been found by one or both of steps 160 and 162, backprojection with full sample rate over the entire image is used, in step 164, to obtain the final SAR image.
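The overall flow of FIG. 4 can be summarized in a processing skeleton. The `ops` object bundling the processing primitives (backprojection, block splitting, contrast, autofocus, navigation improvement) is a hypothetical interface used here only for illustration; the patent does not specify such an API:

```python
def focused_image(sar_data, nav_profile, ops, n_select=4):
    """Skeleton of the FIG. 4 flow, with caller-supplied primitives."""
    # Step 150: first backprojection at a reduced pixel sampling rate
    # (decimating by 2 per dimension costs roughly 25% of a full-rate image).
    coarse = ops.backproject(sar_data, nav_profile, decimate=2)
    # Steps 152-154: compute contrast per image block; keep the best few.
    scores = {k: ops.contrast(block)
              for k, block in ops.split_blocks(coarse).items()}
    selected = sorted(scores, key=scores.get, reverse=True)[:n_select]
    # Step 156: full-rate backprojection of only the selected blocks,
    # Step 158: autofocus each to obtain its phase error profile.
    phase_errors = {
        k: ops.autofocus(ops.backproject(sar_data, nav_profile, block=k))
        for k in selected
    }
    # Steps 160/162: polynomial and/or least-squares navigation improvement.
    improved_nav = ops.improve_navigation(nav_profile, phase_errors)
    # Step 164: final full-rate backprojection with the improved profile.
    return ops.backproject(sar_data, improved_nav)
```

Only the last backprojection runs at full sampling rate over the full image, which is how the redesigned flow avoids paying for two complete image formations.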
[0063] Although limited embodiments of a wide beam SAR focusing method using a navigation solution derived from autofocus data have been specifically described and illustrated herein, many modifications and variations will be apparent to those skilled in the art. For example, in finding an estimated navigation error profile using a polynomial approximation, it may be preferred to use more or fewer than the 7 coefficients of the exemplary embodiment disclosed here, and the orders of the polynomials in which these coefficients appear may be different from those disclosed herein. Accordingly, it is to be understood that the wide beam SAR focusing method using a navigation solution derived from autofocus data employed according to principles of this invention may be embodied other than as specifically described herein. The invention is also defined in the following claims and equivalents thereof.

Claims

WHAT IS CLAIMED IS:
1. A method for forming a synthetic aperture radar (SAR) image, from SAR data and a navigation profile, the method comprising:
generating a first SAR image from the SAR data and the navigation profile;
dividing the SAR image into image blocks;
selecting a subset of the image blocks;
applying an autofocus algorithm to each of the selected image blocks to form phase error profile estimates for the selected image blocks;
generating a navigation error profile estimate from the phase error profile estimates; and
generating a second SAR image from the SAR data, the navigation profile, and the navigation error profile estimate.
2. The method of claim 1, wherein the generating a navigation error profile estimate comprises:
approximating the navigation error profile estimate as a vector of three low-order polynomials in time and
searching the space of polynomial coefficient values for a set of values which optimizes a measure of performance for the navigation error profile estimate.
3. The method of claim 2, wherein the measure of performance is the weighted summed norm of the phase error, taken over a set of selected image blocks.
4. The method of claim 3, wherein an image block is selected on the basis of having image contrast exceeding a predetermined threshold.
5. The method of claim 3, wherein the weight, for a selected image block, used in calculating the weighted summed norm of the phase error, equals (i) the reciprocal of the square root of the image contrast before autofocus for that image block, multiplied by (ii) the ratio of (a) the image contrast before autofocus for that image block to (b) the image contrast after autofocus for that image block.
6. The method of claim 1, wherein the generating a navigation error profile estimate comprises:
selecting a set of image blocks; and
finding, for each point in time, the point in space which minimizes the weighted sum of the squares of the differences between the phase errors for the selected image blocks calculated from the navigation error profile, and the phase errors for the selected image blocks resulting from the autofocus algorithm.
7. The method of claim 6, wherein the selected image blocks are selected on the basis of having image contrast exceeding a predetermined threshold.
8. The method of claim 6, wherein the weight, for a selected image block, used in calculating the weighted sum, is a function of the image contrast before autofocus for that image block and the image contrast after autofocus for that image block.
9. The method of claim 6, wherein the weight, for a selected image block, used in calculating the weighted sum, equals (i) the reciprocal of the square root of the image contrast before autofocus for that image block, multiplied by (ii) the ratio of (a) the image contrast before autofocus for that image block to (b) the image contrast after autofocus for that image block.
10. The method of claim 1, wherein the generating a navigation error profile estimate comprises:
writing an intermediate navigation error profile estimate as a vector of three low-order polynomials in time;
searching the space of polynomial coefficient values for a set of values which optimizes a measure of performance for the intermediate navigation error profile estimate; forming an intermediate navigation profile and an intermediate image block phase error profile using the set of values of the polynomial coefficients;
selecting a set of image blocks; and
using the intermediate navigation profile and the intermediate image block phase error profile to find, for each point in time, the point in space which minimizes the weighted sum of the squares of the differences between the phase errors for the selected image blocks calculated from the point in space, and the phase errors for the selected image blocks resulting from the autofocus algorithm.
11. A method for forming an improved navigation profile from a coarse navigation profile and from a synthetic aperture radar (SAR) image, the method comprising: generating a first SAR image from the SAR data and the navigation profile;
dividing the SAR image into image blocks;
selecting a subset of the image blocks;
applying an autofocus algorithm to each of the selected image blocks to form phase error profile estimates for the selected image blocks; and generating an improved navigation profile from the coarse navigation profile and the phase error profile estimates.
12. The method of claim 11, wherein the generating an improved navigation profile comprises:
approximating a navigation error profile estimate as a vector of three low-order polynomials in time;
searching the space of polynomial coefficient values for a set of values which optimizes a measure of performance for the navigation error profile estimate; and
correcting the coarse navigation profile using the navigation error profile estimate.
13. The method of claim 12, wherein the measure of performance is the weighted summed norm of the phase error, taken over a set of selected image blocks.
14. The method of claim 13, wherein an image block is selected on the basis of having image contrast exceeding a predetermined threshold.
15. The method of claim 13, wherein the weight, for a selected image block, used in calculating the weighted summed norm of the phase error, is a function of the image contrast before autofocus for that image block and the image contrast after autofocus for that image block.
16. The method of claim 13, wherein the weight, for a selected image block, used in calculating the weighted summed norm of the phase error, equals (i) the reciprocal of the square root of the image contrast before autofocus for that image block, multiplied by (ii) the ratio of (a) the image contrast before autofocus for that image block to (b) the image contrast after autofocus for that image block.
17. The method of claim 11, wherein the generating an improved navigation profile comprises:
selecting a set of image blocks; and
finding, for each point in time, the point in space which minimizes the weighted sum of the squares of the differences between the phase errors for the selected image blocks calculated from the point in space, and the phase errors for the selected image blocks resulting from the autofocus algorithm.
18. The method of claim 17, wherein an image block is selected on the basis of having image contrast exceeding a predetermined threshold.
19. The method of claim 17, wherein the weight for an image block in the weighted sum equals (i) the reciprocal of the square root of the image contrast before autofocus for that image block, multiplied by (ii) the ratio of (a) the image contrast before autofocus for that image block to (b) the image contrast after autofocus for that image block.
20. The method of claim 11, wherein the generating an improved navigation profile from the coarse navigation profile and the phase error profile estimates comprises:
approximating an intermediate navigation error profile estimate as a vector of three low-order polynomials in time;
searching the space of polynomial coefficient values for a set of values which optimizes a measure of performance for the intermediate navigation error profile estimate;
forming an intermediate navigation profile and an intermediate image block phase error profile using the set of values of the polynomial coefficients;
selecting a set of image blocks; and
using the intermediate navigation profile and the intermediate image block phase error profile to find, for each point in time, the point in space which minimizes the weighted sum of the squares of the differences between the phase errors for the selected image blocks calculated from the point in space, and the phase errors for the selected image blocks resulting from the autofocus algorithm.
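Claims 17 and 20 solve, at each point in time, for the position in space minimizing a weighted sum of squared differences between phase errors predicted from that position and the autofocus-derived phase errors. A minimal linearized sketch under assumptions not stated in the claims: for a small position error dp, block i's phase error is approximately (4*pi/wavelength) * u_i . dp, with u_i the unit line-of-sight vector to that block, so weighted least squares gives dp in closed form. All names, the wavelength, and the geometry are illustrative.

```python
import numpy as np

def solve_position_error(u, phi, w, wavelength):
    """Weighted least-squares position error at one time instant.

    u:   (N, 3) unit line-of-sight vectors to the N selected image blocks
    phi: (N,)   autofocus-derived phase errors (radians) for those blocks
    w:   (N,)   per-block weights

    For a small position error dp, the predicted phase error of block i is
    (4*pi/wavelength) * u[i] . dp, so we solve the weighted normal equations
    (A^T W A) dp = A^T W phi with A = (4*pi/wavelength) * u.
    Requires at least three blocks with non-coplanar line-of-sight vectors.
    """
    A = (4.0 * np.pi / wavelength) * np.asarray(u, dtype=float)
    AtW = A.T * np.asarray(w, dtype=float)   # scales column i by w[i]
    return np.linalg.solve(AtW @ A, AtW @ np.asarray(phi, dtype=float))
```

In the claimed scheme this solve would be repeated for each point in time along the aperture, with claim 20 supplying the intermediate navigation profile as the linearization point.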
PCT/US2013/036466 2012-06-28 2013-04-12 Wide beam sar focusing method using navigation solution derived from autofocus data WO2014011287A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP13785966.6A EP2867616A2 (en) 2012-06-28 2013-04-12 Wide beam sar focusing method using navigation solution derived from autofocus data
JP2015520176A JP6072240B2 (en) 2012-06-28 2013-04-12 Wide beam SAR focusing method using navigation solutions derived from autofocus data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/536,731 US9239383B2 (en) 2012-01-10 2012-06-28 Wide beam SAR focusing method using navigation solution derived from autofocus data
US13/536,731 2012-06-28

Publications (2)

Publication Number Publication Date
WO2014011287A2 true WO2014011287A2 (en) 2014-01-16
WO2014011287A3 WO2014011287A3 (en) 2014-03-13

Family

ID=49517622

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/036466 WO2014011287A2 (en) 2012-06-28 2013-04-12 Wide beam sar focusing method using navigation solution derived from autofocus data

Country Status (4)

Country Link
US (1) US9239383B2 (en)
EP (1) EP2867616A2 (en)
JP (1) JP6072240B2 (en)
WO (1) WO2014011287A2 (en)


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9903719B2 (en) * 2013-09-03 2018-02-27 Litel Instruments System and method for advanced navigation
US9995817B1 (en) * 2015-04-21 2018-06-12 Lockheed Martin Corporation Three dimensional direction finder with one dimensional sensor array
WO2017195340A1 (en) * 2016-05-13 2017-11-16 三菱電機株式会社 Self-position estimation device, self-position estimation method, and self-position estimation processing program
US10444347B2 (en) * 2016-11-30 2019-10-15 GM Global Technology Operations LLC Accurate self localization using automotive radar synthetic aperture radar
KR101989547B1 (en) * 2018-11-15 2019-06-14 엘아이지넥스원 주식회사 Synthetic aperture radar image restoration apparatus and method thereof
KR102156490B1 (en) * 2019-02-08 2020-09-15 서울대학교산학협력단 Image decoding apparatus based on airborn and differential method of decoding image using the same
US11789142B2 (en) * 2020-06-11 2023-10-17 Mitsubishi Electric Research Laboratories Inc. Graph-based array signal denoising for perturbed synthetic aperture radar
CN113960602B (en) * 2021-12-22 2022-05-20 中科星睿科技(北京)有限公司 Track error information generation method and device, electronic equipment and readable medium

Family Cites Families (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB8630315D0 (en) * 1986-12-18 1987-04-15 Gen Electric Co Plc Synthetic aperture radar
US4853699A (en) * 1987-11-13 1989-08-01 Hughes Aircraft Company Method for cancelling azimuth ambiguity in a SAR receiver
US4851848A (en) * 1988-02-01 1989-07-25 The United States Of America As Represented By The Secretary Of The Navy Frequency agile synthetic aperture radar
JPH05215848A (en) * 1988-08-22 1993-08-27 Mitsubishi Electric Corp Image distortion correcting method and device therefor
US4999635A (en) * 1990-03-29 1991-03-12 Hughes Aircraft Company Phase difference auto focusing for synthetic aperture radar imaging
US5021789A (en) * 1990-07-02 1991-06-04 The United States Of America As Represented By The Secretary Of The Air Force Real-time high resolution autofocus system in digital radar signal processors
FR2695202B1 (en) * 1992-09-03 1994-11-10 Aerospatiale On-board navigation system for an air vehicle comprising a radar with lateral view and with aperture synthesis.
DE4311754C1 (en) * 1993-04-08 1994-06-23 Deutsche Forsch Luft Raumfahrt Method for extracting movement errors of a carrier carrying a coherent imaging radar system from raw radar data and device for carrying out the method
US5432520A (en) * 1993-10-18 1995-07-11 Hughes Aircraft Company SAR/GPS inertial method of range measurement
SE517768C2 (en) * 1995-09-21 2002-07-16 Totalfoersvarets Forskningsins A SAR radar system
US6046695A (en) * 1996-07-11 2000-04-04 Science Application International Corporation Phase gradient auto-focus for SAR images
US5937102A (en) * 1996-10-09 1999-08-10 California Institute Of Technology Image reconstruction
US6628844B1 (en) * 1997-02-19 2003-09-30 Massachusetts Institute Of Technology High definition imaging apparatus and method
US6037892A (en) * 1998-05-28 2000-03-14 Multispec Corporation Method for automatic focusing of radar or sonar imaging systems using high-order measurements
US6492932B1 (en) * 2001-06-13 2002-12-10 Raytheon Company System and method for processing squint mapped synthetic aperture radar data
JP2004198275A (en) * 2002-12-19 2004-07-15 Mitsubishi Electric Corp Synthetic aperture radar system, and image reproducing method
US6738009B1 (en) * 2002-12-27 2004-05-18 General Atomics System and method for synthetic aperture radar mapping a ground strip having extended range swath
US6781541B1 (en) * 2003-07-30 2004-08-24 Raytheon Company Estimation and correction of phase for focusing search mode SAR images formed by range migration algorithm
US6911933B1 (en) * 2004-05-14 2005-06-28 The United States Of America As Represented By The Secretary Of The Air Force Dynamic logic algorithm used for detecting slow-moving or concealed targets in synthetic aperture radar (SAR) images
US7145496B2 (en) 2004-11-23 2006-12-05 Raytheon Company Autofocus method based on successive parameter adjustments for contrast optimization
US7145498B2 (en) * 2004-11-23 2006-12-05 Raytheon Company Efficient autofocus method for swath SAR
US7277042B1 (en) * 2006-05-12 2007-10-02 Raytheon Company Compensation of flight path deviation for spotlight SAR
US7663529B2 (en) 2006-08-15 2010-02-16 General Dynamics Advanced Information Systems, Inc. Methods for two-dimensional autofocus in high resolution radar systems
US7551119B1 (en) * 2008-01-08 2009-06-23 Sandia Corporation Flight path-driven mitigation of wavefront curvature effects in SAR images
US8115666B2 (en) * 2008-04-17 2012-02-14 Mirage Systems, Inc. Ground penetrating synthetic aperture radar
US9110167B2 (en) * 2008-10-07 2015-08-18 The Boeing Company Correction of spatially variant phase error for synthetic aperture radar
DE102011010987A1 (en) * 2010-03-24 2011-09-29 Lfk-Lenkflugkörpersysteme Gmbh Navigation method for a missile
FR2960300B1 (en) * 2010-05-18 2014-01-03 Thales Sa METHOD FOR CONSTRUCTING FOCUSED RADAR IMAGES
US8861588B2 (en) * 2011-04-04 2014-10-14 The United States Of America As Represented By The Secretary Of The Army Apparatus and method for sampling and reconstruction of wide bandwidth signals below Nyquist rate
US9041585B2 (en) * 2012-01-10 2015-05-26 Raytheon Company SAR autofocus for ground penetration radar

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103913741A (en) * 2014-03-18 2014-07-09 电子科技大学 Synthetic aperture radar efficient autofocus BP method
CN105068071A (en) * 2015-07-16 2015-11-18 中国科学院电子学研究所 Rapid imaging method based on back-projection operator
EP3480623A4 (en) * 2016-08-01 2019-07-10 Mitsubishi Electric Corporation Synthetic-aperture radar device
US10948588B2 (en) 2016-08-01 2021-03-16 Mitsubishi Electric Corporation Synthetic-aperture radar device
CN111007468A (en) * 2019-12-25 2020-04-14 中国航空工业集团公司西安飞机设计研究所 Radar SAR imaging positioning error eliminating method
CN111007468B (en) * 2019-12-25 2023-06-23 中国航空工业集团公司西安飞机设计研究所 Radar SAR imaging positioning error eliminating method
CN113050088A (en) * 2021-03-17 2021-06-29 电子科技大学 Positioning method based on video SAR shadow
CN113050088B (en) * 2021-03-17 2022-08-02 电子科技大学 Positioning method based on video SAR shadow

Also Published As

Publication number Publication date
EP2867616A2 (en) 2015-05-06
JP2015529800A (en) 2015-10-08
US9239383B2 (en) 2016-01-19
WO2014011287A3 (en) 2014-03-13
JP6072240B2 (en) 2017-02-01
US20150061927A1 (en) 2015-03-05

Similar Documents

Publication Publication Date Title
EP2867616A2 (en) Wide beam sar focusing method using navigation solution derived from autofocus data
US20190235043A1 (en) System and method for multi-sensor multi-target 3d fusion using an unbiased measurement space
US7116265B2 (en) Recognition algorithm for the unknown target rejection based on shape statistics obtained from orthogonal distance function
CN108459321A (en) Justify the big strabismus High Resolution SAR Imaging method of model based on range-azimuth
CN107192375B (en) A kind of unmanned plane multiple image adaptive location bearing calibration based on posture of taking photo by plane
US20060109163A1 (en) Autofocus method based on successive parameter adjustments for contrast optimization
CN113156436B (en) Circular synthetic aperture radar self-focusing imaging method, system and electronic equipment
CN111551934A (en) Motion compensation self-focusing method and device for unmanned aerial vehicle SAR imaging
JP2018518691A (en) System and method for antenna analysis and verification
EP3657212A1 (en) Method and system of decomposition of composite targets on elements of a radar target signature with a super-resolution, using the total signal spectrum
JP2006343290A (en) Imaging radar device
CN105572648B (en) A kind of synthetic aperture radar echo data range migration correction method and apparatus
CN113985436A (en) Unmanned aerial vehicle three-dimensional map construction and positioning method and device based on SLAM
CN116704037B (en) Satellite lock-losing repositioning method and system based on image processing technology
KR102185307B1 (en) Method and system for high resolving object response of sar images
WO2011068442A1 (en) Method for angular focusing of signals in scanning radar systems
US9964640B2 (en) Method for phase unwrapping using confidence-based rework
CN115601278A (en) High-precision motion error compensation method based on sub-image registration
CN110736988B (en) Bistatic PFA moving object parameter estimation and imaging method
RU2672092C1 (en) Method of measuring the angular position of terrestrial fixed radio-contrast objects
KR20190093739A (en) Method and apparatus for tracking target based on phase gradient autofocus
CN112070666A (en) SAR image splicing method based on image entropy
CN115222771B (en) Target tracking method and device
EP3159651A1 (en) Improvements in and relating to missile targeting
WO2021250876A1 (en) Road shape estimation device, road shape estimation method, and road shape estimation program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13785966

Country of ref document: EP

Kind code of ref document: A2

ENP Entry into the national phase

Ref document number: 2015520176

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 2013785966

Country of ref document: EP
