CN102680974A - Signal processing method of satellite-borne sliding spotlight synthetic aperture radar - Google Patents

Signal processing method of satellite-borne sliding spotlight synthetic aperture radar

Info

Publication number
CN102680974A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012101688202A
Other languages
Chinese (zh)
Other versions
CN102680974B (en)
Inventor
李财品
谭小敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Institute of Space Radio Technology
Original Assignee
Xian Institute of Space Radio Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Institute of Space Radio Technology filed Critical Xian Institute of Space Radio Technology
Priority to CN 201210168820 priority Critical patent/CN102680974B/en
Publication of CN102680974A publication Critical patent/CN102680974A/en
Application granted granted Critical
Publication of CN102680974B publication Critical patent/CN102680974B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a signal processing method for a satellite-borne sliding spotlight synthetic aperture radar. The method includes: performing sub-aperture processing on the original echo data of the sliding spotlight synthetic aperture radar (SAR); performing range compression and range migration correction with a chirp scaling (CS) algorithm; introducing squint-angle compensation into the CS algorithm; compensating the spatial variance of the scene; adjusting the range migration amount of each sub-aperture through the CS scaling factor so that it is consistent with the Doppler parameters of the full aperture; performing azimuth compression and quadratic-term compensation in the frequency domain and deramp processing in the time domain; splicing the data of all sub-apertures to restore the full-aperture resolution; and applying spectral analysis (SPECAN) processing so that the signal is phase-preserving. By adopting the signal processing method, imaging quality can be improved under large-scene squint conditions.

Description

Signal processing method of satellite-borne sliding spotlight synthetic aperture radar
Technical Field
The invention relates to a method for processing synthetic aperture radar signals, and in particular to a signal processing method for the sliding spotlight mode under large-scene squint conditions.
Background
Synthetic Aperture Radar (SAR) is a typical radar system for ground reconnaissance imaging. It is mainly used for military reconnaissance and disaster monitoring and is one of the important microwave remote sensing instruments capable of all-day, all-weather operation. The common operating modes of synthetic aperture radar include the stripmap mode, the scan mode, the spotlight mode, the sliding spotlight mode, the TOPS mode, and so on. Among these SAR imaging modes, the stripmap mode can provide a wide azimuth swath but has difficulty achieving high resolution; the scan mode can obtain a wide swath in both the range and azimuth directions simultaneously, but at the expense of resolution, which is lower than in the other modes; the spotlight mode can obtain high resolution, but its azimuth swath is very small, being limited to the size of a single beam footprint; the sliding spotlight mode breaks through this limitation of the spotlight mode and can achieve both high resolution and a wide azimuth swath. Processing algorithms for the sliding spotlight mode mainly comprise sub-aperture methods and methods based on azimuth preprocessing. A signal processing block diagram of sub-aperture processing for the sliding spotlight mode is shown in FIG. 1, and a block diagram of sliding spotlight processing based on azimuth preprocessing is shown in FIG. 2. Existing signal processing methods for sliding spotlight synthetic aperture radar can be summarized as follows: acquire the echo signals; reduce the PRF requirement with a sub-aperture or azimuth-preprocessing method so as to avoid azimuth ambiguity in imaging; and then image with a conventional algorithm such as the CS, RD, or RMA algorithm.
During the actual flight of a satellite or an aircraft, the Doppler center frequency of the target at which the antenna phase center points may not be zero because of attitude control or other reasons; that is, the antenna phase center has a certain squint angle. The existence of the squint angle causes a loss of signal-to-noise ratio in radar imaging, degradation of the azimuth ambiguity performance, and image offset. In addition, the beam of a spaceborne synthetic aperture radar illuminates a large area, so that, especially in high-resolution imaging, the spatial variance of the scene cannot be ignored. Existing sliding spotlight SAR imaging algorithms do not consider the squinted case, do not compensate the spatial variance of a large scene, and do not apply consistent phase compensation to each sub-aperture (which affects image quality when the sub-apertures are finally spliced); all three aspects degrade imaging quality. Therefore, it is necessary to develop a sliding spotlight SAR imaging algorithm that can improve image quality under large-scene squint conditions.
Disclosure of Invention
The technical problem solved by the invention is as follows: to overcome the defects of the prior art, a signal processing method for a satellite-borne sliding spotlight synthetic aperture radar is provided which can improve imaging quality under large-scene squint conditions.
The invention adopts the following technical solution:
A signal processing method for a satellite-borne sliding spotlight synthetic aperture radar, characterized by comprising the following steps:
(1) dividing the collected raw data of the sliding spotlight synthetic aperture radar into sub-apertures;
(2) processing each divided block of sub-aperture data as follows:
performing an azimuth FFT; multiplying the azimuth-Fourier-transformed data by a scaling factor; performing a range FFT, and then performing range compression and range migration correction; performing a range inverse FFT; after the inverse FFT, performing azimuth compression and compensating the residual phase; performing quadratic-term compensation after the azimuth compression; then performing an azimuth inverse FFT; and, after the azimuth inverse FFT, performing deramp (frequency-modulation removal) processing;
(3) combining the processed sub-aperture data into full-aperture data;
(4) performing an azimuth FFT on the full-aperture data;
(5) performing residual frequency-modulation compensation;
(6) performing an azimuth inverse FFT;
(7) performing Specan compensation so that the signal is phase-preserving.
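For illustration only, the following is a minimal structural sketch in Python/NumPy of the data flow through steps (1)-(7). It is not the patent's implementation: the phase functions H1-H7 are passed in as plain arrays (unity phases in the toy call at the end), the array layout (azimuth along axis 0, range along axis 1) is an assumption, and all function names and sizes are illustrative.

```python
import numpy as np

def process_subaperture(sub, H1, H2, H3, H4, H5):
    S = np.fft.fft(sub, axis=0)      # azimuth FFT
    S = S * H1                       # multiply by the chirp-scaling factor
    S = np.fft.fft(S, axis=1)        # range FFT
    S = S * H2                       # range compression + range migration correction
    S = np.fft.ifft(S, axis=1)       # range inverse FFT
    S = S * H3 * H4                  # azimuth compression, residual phase, quadratic term
    s = np.fft.ifft(S, axis=0)       # azimuth inverse FFT
    return s * H5                    # deramp (remove residual azimuth FM)

def process_full_aperture(subs, H6, H7):
    full = np.concatenate(subs, axis=0)  # splice sub-apertures in time order
    F = np.fft.fft(full, axis=0)         # azimuth FFT of the full aperture
    F = F * H6                           # residual FM compensation
    img = np.fft.ifft(F, axis=0)         # azimuth inverse FFT
    return img * H7                      # Specan compensation (phase preservation)

# toy call with unity phase functions and random data, only to show the data flow
na, nr, n_sub = 512, 256, 2
raw = np.random.randn(n_sub * na, nr) + 1j * np.random.randn(n_sub * na, nr)
ones = np.ones((na, nr))
subs = [process_subaperture(raw[i * na:(i + 1) * na], ones, ones, ones, ones, ones)
        for i in range(n_sub)]
image = process_full_aperture(subs, np.ones((n_sub * na, nr)), np.ones((n_sub * na, nr)))
```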
The formula of the scaling factor in step (2) is:

H_1 = \exp\left( j\pi\, k(f_a;R)\, c(f_a)\left(t - \frac{2R(f_a,R_0)}{c}\right)^{2}\right),

where

c(f_a) = \frac{\Delta\alpha(f_a)}{\alpha(f_{dc})}, \quad \Delta\alpha(f_a) = \alpha(f_a) - \alpha(f_{dc}), \quad \alpha(f_a) = \frac{\sin(\theta_{ref})}{\sqrt{1-\left(\frac{f_a\lambda}{2V}\right)^{2}}}, \quad \alpha(f_{dc}) = \frac{\sin(\theta_c)}{\sqrt{1-\left(\frac{f_{dc}\lambda}{2V}\right)^{2}}}

and where f_dc is the Doppler center frequency of the full aperture, t is the satellite flight time, f_a is the azimuth frequency, c is the speed of light, R_0 is the slant range to the scene center, k(f_a;R) is the equivalent frequency-modulation rate, R(f_a,R_0) is the instantaneous distance from the scene center to the satellite, V is the satellite flight velocity, θ_ref is the squint angle at the scene center, θ_c is the squint angle at the center of the full aperture, and λ is the wavelength.
k(f_a;R) = \frac{k_r}{1 - k_r\, R \sin(\theta_{ref})\,\frac{2\lambda}{c^{2}}\cdot\frac{\left(\frac{\lambda f_a}{2V}\right)^{2}}{\left[1-\left(\frac{\lambda f_a}{2V}\right)^{2}\right]^{3/2}}}

where k_r is the chirp rate of the linear frequency-modulated signal, R = R_0 + \frac{c}{2 f_s}\,[-nrn/2 : nrn/2-1], R_0 is the slant range to the scene center, f_s is the signal sampling rate, and nrn is the number of range samples.
In step (2), the azimuth compression function used for azimuth compression and residual-phase compensation is:

H_3 = \exp\left(j\frac{4\pi}{\lambda}\, R(f_a;r)\left[\sin\theta_{ref}\sqrt{1-\left(\frac{\lambda f_a}{2V}\right)^{2}} - 1\right]\right)\exp\left(j\Theta(f_a;r)\right)\exp\left(j\frac{2\pi f_a}{V}\cos\theta_{ref}\right),

where \Theta(f_a;r) = \frac{4\pi}{c^{2}}\, k(f_a;R)\left(c(f_a)+1\right)c(f_a)\left(\frac{R}{\sin(\theta_{ref})} - R_0\right)^{2} and R(f_a;r) is the slant range from the satellite to the ground target point.
Compared with the prior art, the invention has the following advantages:
the method provided by the invention is used for imaging in a sliding bunching working mode under large-scene squint. The method not only considers the compensation of the squint angle and the phase consistency compensation among the sub apertures, but also considers the space-variant property of the range migration under the condition of a large scene.
Drawings
FIG. 1 is a block diagram of conventional sub-aperture sliding spotlight processing;
FIG. 2 is a block diagram of prior-art sliding spotlight processing based on azimuth preprocessing;
FIG. 3 is a flow chart of the processing method of the present invention;
FIG. 4 is the imaging result of sub-aperture 1;
FIG. 5 is the point-target spectrum of sub-aperture 1;
FIG. 6 is the imaging result of sub-aperture 2;
FIG. 7 is the point-target spectrum of sub-aperture 2;
FIG. 8 is the sliding spotlight SAR full-aperture imaging result;
FIG. 9 is the point-target spectrum of the full-aperture imaging.
Detailed Description
The invention will now be further described with reference to the accompanying drawings. As shown in FIG. 3, the processing method of the present invention comprises the following steps:
(1) Sub-aperture division
First, the original sliding spotlight SAR data are divided into sub-apertures, the sub-aperture length being chosen according to the instantaneous bandwidth of a target point of the sliding spotlight SAR. The number of sub-apertures can be determined from the formula
[formula for the number of sub-apertures, given as an image in the original publication]
where PRF is the pulse repetition frequency, B_a is the instantaneous bandwidth of the signal, and k_rot is the chirp rate of the satellite with respect to the rotation center. Dividing the data into sub-apertures effectively removes the azimuth ambiguity of the sliding spotlight synthetic aperture radar, reduces the required system pulse repetition frequency to a value consistent with that of the stripmap mode, and greatly reduces the data volume of the SAR data downlink. In addition, sub-aperture division reduces the range migration and the range-azimuth coupling, which facilitates the decoupling processing in the imaging algorithm.
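As an illustration, a minimal sketch of the sub-aperture division of step (1), assuming the echo data are stored as an azimuth-by-range complex matrix. The number of sub-apertures n_sub is treated as a given input here, since the formula that determines it appears only as an image in the original text; the function name and data sizes are illustrative.

```python
import numpy as np

def split_subapertures(raw, n_sub):
    """raw: complex echo matrix (azimuth x range). Returns a list of n_sub azimuth blocks."""
    na = raw.shape[0] - raw.shape[0] % n_sub   # drop trailing pulses that do not fill a block
    return np.split(raw[:na], n_sub, axis=0)

raw = np.zeros((4096, 2048), dtype=complex)    # placeholder echo matrix
subs = split_subapertures(raw, n_sub=2)        # two sub-apertures, as in the simulation example
```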
(2) Performing an azimuth FFT on each block of divided sub-aperture data
After the azimuth Fourier transform, the signal is expressed as:
S(t, f_a; r) = \mathrm{rect}\left(\frac{X - v_a t}{L}\right)\mathrm{rect}\left(\frac{t - 2R(f_a;r)/c}{T}\right)\exp\left(-j\frac{4\pi R(f_a;r)\sin\theta}{\lambda}\sqrt{1-\left(\frac{\lambda f_a}{2V}\right)^{2}} - j\frac{2\pi f_a}{V}\cos\theta\right)\exp\left(-j\pi k(f_a;R)\left(t - \frac{2R(f_a;r)}{c}\right)^{2}\right)
In this expression, rect((X - v_a t)/L) is the azimuth window function, X is the azimuth position of the ground target point, L is the length of the beam footprint, v_a is the ground speed of the beam footprint, t is the satellite flight time, rect((t - 2R(f_a;r)/c)/T) is the range window function, T is the range echo duration, R(f_a;r) is the slant range from the satellite to the ground target point, r is the closest range from the platform to the ground target, θ is the squint angle, λ is the wavelength, V is the satellite velocity, k(f_a;R) is the equivalent frequency-modulation rate, f_a is the azimuth frequency, and c is the speed of light.
(3) Multiplying the azimuth-Fourier-transformed data by the scaling factor
H_1 = \exp\left( j\pi\, k(f_a;R)\, c(f_a)\left(t - \frac{2R(f_a,R_0)}{c}\right)^{2}\right)

The CS factor here is:

c(f_a) = \frac{\Delta\alpha(f_a)}{\alpha(f_{dc})}, \quad \Delta\alpha(f_a) = \alpha(f_a) - \alpha(f_{dc}),

\alpha(f_a) = \frac{\sin(\theta_{ref})}{\sqrt{1-\left(\frac{f_a\lambda}{2V}\right)^{2}}}, \quad \alpha(f_{dc}) = \frac{\sin(\theta_c)}{\sqrt{1-\left(\frac{f_{dc}\lambda}{2V}\right)^{2}}}
R(f_a,R_0) is the instantaneous distance from the scene center to the satellite, f_dc is the Doppler center frequency of the full aperture before sub-aperture division, θ_c is the squint angle at the center of the full aperture, and θ_ref is the squint angle at the scene center. It should be noted that, to keep the final sub-aperture splicing consistent, the CS factor that changes the frequency-modulation scaling must be normalized, i.e., the CS factor value of each sub-aperture is made identical by normalization.
The space variance of a large scene is taken into account in the equivalent frequency-modulation rate:

k(f_a;R) = \frac{k_r}{1 - k_r\, R \sin(\theta_{ref})\,\frac{2\lambda}{c^{2}}\cdot\frac{\left(\frac{\lambda f_a}{2V}\right)^{2}}{\left[1-\left(\frac{\lambda f_a}{2V}\right)^{2}\right]^{3/2}}}

where k_r is the chirp rate of the linear frequency-modulated signal, R = R_0 + \frac{c}{2 f_s}\,[-nrn/2 : nrn/2-1], R_0 is the slant range to the scene center, f_s is the signal sampling rate, and nrn is the number of range samples.
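A minimal numerical sketch of how the quantities above could be evaluated, under the assumptions made in the reconstruction here (in particular the 1 - k_r(...) form of the denominator of k(f_a;R)); the reference migration trajectory R_ref = R(f_a,R_0) is assumed to be supplied by the caller from the orbit geometry and is not derived in this sketch.

```python
import numpy as np

c0 = 3.0e8  # speed of light [m/s]

def alpha(f, theta, lam, V):
    """alpha(f) = sin(theta) / sqrt(1 - (f*lam/(2V))^2)."""
    return np.sin(theta) / np.sqrt(1.0 - (f * lam / (2.0 * V)) ** 2)

def cs_factor(fa, fdc, theta_ref, theta_c, lam, V):
    """c(fa) = (alpha(fa) - alpha(fdc)) / alpha(fdc); normalization across sub-apertures is done elsewhere."""
    a_dc = alpha(fdc, theta_c, lam, V)
    return (alpha(fa, theta_ref, lam, V) - a_dc) / a_dc

def k_equivalent(fa, R, kr, theta_ref, lam, V):
    """Equivalent FM rate k(fa;R); the '1 - kr*(...)' denominator is the assumed chirp-scaling form."""
    x = (lam * fa / (2.0 * V)) ** 2
    src = R * np.sin(theta_ref) * (2.0 * lam / c0 ** 2) * x / (1.0 - x) ** 1.5
    return kr / (1.0 - kr * src)

def H1(t, fa, R, R_ref, kr, fdc, theta_ref, theta_c, lam, V):
    """Chirp-scaling phase; t is fast time (row), fa azimuth frequency (column), R_ref = R(fa, R0)."""
    cs = cs_factor(fa, fdc, theta_ref, theta_c, lam, V)
    km = k_equivalent(fa, R, kr, theta_ref, lam, V)
    return np.exp(1j * np.pi * km * cs * (t - 2.0 * R_ref / c0) ** 2)
```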
(4) Performing a range FFT
(5) Performing range compression and range migration correction
After the previous step, range compression and range migration correction are performed with the phase function

H_2 = \exp\left(j\pi\,\frac{f_r^{2}}{k(f_a;R)\left(1 + c(f_a)\right)}\right)\exp\left(j\,\frac{4\pi R(f_a;R_0)}{c}\,f_r\right)

where f_r is the range frequency.
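A corresponding sketch of the range-compression and range-migration-correction filter H_2 of step (5), under the same assumptions as above; km, cs and R_ref are the azimuth-frequency-dependent quantities computed as in the previous sketch.

```python
import numpy as np

c0 = 3.0e8  # speed of light [m/s]

def H2(fr, km, cs, R_ref):
    """Range compression / RCM correction filter.
    fr: range-frequency axis (row vector); km = k(fa;R), cs = c(fa), R_ref = R(fa;R0)
    are azimuth-frequency-dependent quantities (column vectors)."""
    bulk = np.exp(1j * np.pi * fr ** 2 / (km * (1.0 + cs)))   # range compression term
    rcmc = np.exp(1j * 4.0 * np.pi * R_ref * fr / c0)         # range-migration correction term
    return bulk * rcmc
```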
(6) Performing a range inverse FFT
(7) After the transform, performing azimuth compression and compensating the residual phase by multiplying by the following azimuth compression function:
H_3 = \exp\left(j\frac{4\pi}{\lambda}\, R(f_a;r)\left[\sin\theta_{ref}\sqrt{1-\left(\frac{\lambda f_a}{2V}\right)^{2}} - 1\right]\right)\exp\left(j\Theta(f_a;r)\right)\exp\left(j\frac{2\pi f_a}{V}\cos\theta_{ref}\right)

where \Theta(f_a;r) = \frac{4\pi}{c^{2}}\, k(f_a;R)\left(c(f_a)+1\right)c(f_a)\left(\frac{R}{\sin(\theta_{ref})} - R_0\right)^{2}
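A sketch of the azimuth-compression function H_3 of step (7) as reconstructed above; R(f_a;r) is assumed to be supplied from the system geometry, and km and cs are as in the earlier sketches.

```python
import numpy as np

c0 = 3.0e8  # speed of light [m/s]

def residual_phase(km, cs, R, R0, theta_ref):
    """Theta(fa;r) = (4*pi/c^2) * k(fa;R) * (c(fa)+1) * c(fa) * (R/sin(theta_ref) - R0)^2."""
    return (4.0 * np.pi / c0 ** 2) * km * (cs + 1.0) * cs * (R / np.sin(theta_ref) - R0) ** 2

def H3(fa, R_fa_r, km, cs, R, R0, theta_ref, lam, V):
    """Azimuth compression with squint compensation; R_fa_r = R(fa;r) (assumed supplied)."""
    beta = np.sqrt(1.0 - (lam * fa / (2.0 * V)) ** 2)
    azi = np.exp(1j * (4.0 * np.pi / lam) * R_fa_r * (np.sin(theta_ref) * beta - 1.0))
    res = np.exp(1j * residual_phase(km, cs, R, R0, theta_ref))
    shift = np.exp(1j * (2.0 * np.pi * fa / V) * np.cos(theta_ref))
    return azi * res * shift
```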
(8) Quadratic-term compensation after azimuth compression
The data are multiplied by the quadratic compensation function H_4:
H_4 = \exp\left(j\,\frac{\pi f_a^{2}}{k_{scl}(r)}\right)
where k_{scl}(r) = -\frac{2V^{2}}{\lambda\, r_{scl}(r)}, with r_{scl}(r) = \frac{r_{scl0}}{r_{rot0}}\, r_{rot}(r) and r_{rot}(r) = \frac{r_{rot0} - r}{1 - r_{scl0}/r_{rot0}}; r_{scl}(r) is the scene slant-range function, r_{rot}(r) is the rotation slant-range function, r_{rot0} is the distance from the platform to the rotation point, and the choice of r_{scl0} is related to the image azimuth pixel spacing and is generally taken as the distance from the platform to the ground target point.
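A sketch of the quadratic-term compensation H_4 of step (8) using the reconstructed r_rot(r), r_scl(r) and k_scl(r); the reading of the flattened fractions above is an interpretation of the original text.

```python
import numpy as np

def r_rot(r, r_scl0, r_rot0):
    """Rotation slant-range function, r_rot(r) = (r_rot0 - r) / (1 - r_scl0/r_rot0)."""
    return (r_rot0 - r) / (1.0 - r_scl0 / r_rot0)

def r_scl(r, r_scl0, r_rot0):
    """Scene slant-range function, r_scl(r) = (r_scl0 / r_rot0) * r_rot(r)."""
    return (r_scl0 / r_rot0) * r_rot(r, r_scl0, r_rot0)

def k_scl(r, r_scl0, r_rot0, lam, V):
    """k_scl(r) = -2 V^2 / (lambda * r_scl(r))."""
    return -2.0 * V ** 2 / (lam * r_scl(r, r_scl0, r_rot0))

def H4(fa, r, r_scl0, r_rot0, lam, V):
    """Quadratic-term compensation H4 = exp(j*pi*fa^2 / k_scl(r))."""
    return np.exp(1j * np.pi * fa ** 2 / k_scl(r, r_scl0, r_rot0, lam, V))

# sanity check with the simulation geometry: r_rot(617 km) should return 1234 km
print(r_rot(617e3, r_scl0=617e3, r_rot0=1234e3))
```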
(9) Performing an azimuth inverse FFT
(10) Deramp processing after the transform
After the azimuth inverse FFT, in order to further reduce the azimuth processing bandwidth, a deramp (frequency-modulation removal) operation

H_5 = \exp\left(-j\pi\, k_{rot}(r)\,(t_a - t_{mid})^{2}\right)

is applied, where k_{rot}(r) is given by a formula shown only as an image in the original publication, t_a is the azimuth time, and t_{mid} is the scene center time.
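A sketch of the deramp step (10). Because the original formula for k_rot(r) is given only as an image, the expression used below, -2V^2/(λ r_rot(r)), i.e. the Doppler rate referred to the rotation slant range, is an assumption and not the patent's verbatim formula; the quadratic time dependence of H_5 is likewise assumed.

```python
import numpy as np

def k_rot(r, r_scl0, r_rot0, lam, V):
    """ASSUMED form: Doppler rate referred to the rotation slant range, -2V^2/(lambda*r_rot(r))."""
    r_rot_r = (r_rot0 - r) / (1.0 - r_scl0 / r_rot0)
    return -2.0 * V ** 2 / (lam * r_rot_r)

def H5(ta, t_mid, r, r_scl0, r_rot0, lam, V):
    """Deramp phase (quadratic time dependence assumed): exp(-j*pi*k_rot(r)*(ta - t_mid)^2)."""
    return np.exp(-1j * np.pi * k_rot(r, r_scl0, r_rot0, lam, V) * (ta - t_mid) ** 2)
```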
(11) Each block of sub-aperture data is processed according to steps (2) to (10), and the sub-apertures are then combined in time order into full-aperture data.
(12) Performing an azimuth FFT on the combined full-aperture data
(13) Performing residual frequency-modulation compensation
The combined full-aperture data are multiplied by H_6:
H_6 = \exp\left(j\,\frac{\pi}{k_{eff}}\, f_a^{2}\right), where k_{eff}(r) = k_{scl}(r) - k_{rot}(r).
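A sketch of the residual-FM compensation H_6 of step (13), with k_eff(r) = k_scl(r) - k_rot(r) as stated above; k_scl_r and k_rot_r are the values returned by the earlier sketches.

```python
import numpy as np

def H6(fa, k_scl_r, k_rot_r):
    """Residual FM compensation in the azimuth-frequency domain, k_eff = k_scl(r) - k_rot(r)."""
    k_eff = k_scl_r - k_rot_r
    return np.exp(1j * np.pi * fa ** 2 / k_eff)
```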
(14) Performing an azimuth inverse FFT
(15) Finally, performing spectral analysis (Specan) compensation
To make the final imaging result phase-preserving, the data are multiplied by the following phase function:
H_7 = \exp\left(j\pi\, k_t(r)\left(1 - \frac{r_{scl0}}{r_{rot0}}\right)^{2}(t - t_{mid})^{2}\right)

where k_t(r) = -\frac{2V^{2}}{\lambda\left(r_{rot}(r) - r_{scl}(r)\right)}.
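A sketch of the Specan (phase-preserving) compensation H_7 of step (15) as reconstructed above; r_rot_r and r_scl_r are the slant-range functions defined in step (8).

```python
import numpy as np

def H7(t, t_mid, r_rot_r, r_scl_r, r_scl0, r_rot0, lam, V):
    """Specan compensation with k_t(r) = -2V^2/(lambda*(r_rot(r) - r_scl(r)))."""
    k_t = -2.0 * V ** 2 / (lam * (r_rot_r - r_scl_r))
    return np.exp(1j * np.pi * k_t * (1.0 - r_scl0 / r_rot0) ** 2 * (t - t_mid) ** 2)
```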
the steps consider the processing under the oblique angle, and the compensation of the oblique angle is introduced into the CS algorithm. Such as a factor <math> <mrow> <mi>&alpha;</mi> <mrow> <mo>(</mo> <msub> <mi>f</mi> <mi>a</mi> </msub> <mo>)</mo> </mrow> <mo>=</mo> <mi>sin</mi> <mrow> <mo>(</mo> <msub> <mi>&theta;</mi> <mi>ref</mi> </msub> <mo>)</mo> </mrow> <mo>/</mo> <msup> <msqrt> <mn>1</mn> <mo>-</mo> <msup> <mrow> <mo>(</mo> <mfrac> <mrow> <msub> <mi>f</mi> <mi>a</mi> </msub> <mi>&lambda;</mi> </mrow> <mrow> <mn>2</mn> <mi>V</mi> </mrow> </mfrac> <mo>)</mo> </mrow> <mn>2</mn> </msup> </msqrt> <mrow> <mo>-</mo> <mn>1</mn> </mrow> </msup> <mo>,</mo> </mrow> </math> Azimuthal compression function <math> <mrow> <mi>H</mi> <mn>3</mn> <mo>=</mo> <mi>exp</mi> <mrow> <mo>(</mo> <mi>j</mi> <mfrac> <mrow> <mn>4</mn> <mi>&pi;</mi> </mrow> <mi>&lambda;</mi> </mfrac> <mrow> <mo>(</mo> <mi>R</mi> <mrow> <mo>(</mo> <msub> <mi>f</mi> <mi>a</mi> </msub> <mo>;</mo> <mi>r</mi> <mo>)</mo> </mrow> <mo>)</mo> </mrow> <mo>[</mo> <mi>sin</mi> <msub> <mi>&theta;</mi> <mi>ref</mi> </msub> <msqrt> <mn>1</mn> <mo>-</mo> <msup> <mrow> <mo>(</mo> <mfrac> <msub> <mi>&lambda;f</mi> <mi>a</mi> </msub> <mrow> <mn>2</mn> <mi>V</mi> </mrow> </mfrac> <mo>)</mo> </mrow> <mn>2</mn> </msup> </msqrt> <mo>-</mo> <mn>1</mn> <mo>]</mo> <mo>)</mo> </mrow> <mi>exp</mi> <mrow> <mo>(</mo> <mi>j&Theta;</mi> <mrow> <mo>(</mo> <msub> <mi>f</mi> <mi>a</mi> </msub> <mo>;</mo> <mi>r</mi> <mo>)</mo> </mrow> <mo>)</mo> </mrow> <mi>exp</mi> <mrow> <mo>(</mo> <mi>j</mi> <mfrac> <msub> <mrow> <mn>2</mn> <mi>&pi;f</mi> </mrow> <mi>a</mi> </msub> <mi>V</mi> </mfrac> <mi>cos</mi> <msub> <mi>&theta;</mi> <mi>ref</mi> </msub> <mo>)</mo> </mrow> <mo>,</mo> </mrow> </math> Wherein <math> <mrow> <mi>&Theta;</mi> <mrow> <mo>(</mo> <msub> <mi>f</mi> <mi>a</mi> </msub> <mo>;</mo> <mi>r</mi> <mo>)</mo> </mrow> <mo>=</mo> <mfrac> <mrow> <mn>4</mn> <mi>&pi;</mi> </mrow> <msup> <mi>c</mi> <mn>2</mn> </msup> </mfrac> <mi>k</mi> <mrow> <mo>(</mo> <msub> <mi>f</mi> <mi>a</mi> </msub> <mo>;</mo> <mi>R</mi> <mo>)</mo> </mrow> <mrow> <mo>(</mo> <mi>c</mi> <mrow> <mo>(</mo> <msub> <mi>f</mi> <mi>a</mi> </msub> <mo>)</mo> </mrow> <mo>+</mo> <mn>1</mn> <mo>)</mo> </mrow> <mi>c</mi> <mrow> <mo>(</mo> <msub> <mi>f</mi> <mi>a</mi> </msub> <mo>)</mo> </mrow> <msup> <mrow> <mo>(</mo> <mi>R</mi> <mo>/</mo> <mi>sin</mi> <mrow> <mo>(</mo> <msub> <mi>&theta;</mi> <mi>ref</mi> </msub> <mo>)</mo> </mrow> <mo>-</mo> <msub> <mi>R</mi> <mn>0</mn> </msub> <mo>)</mo> </mrow> <mn>2</mn> </msup> </mrow> </math> Etc. introduce compensation for squint angles.
A point-target simulation of the large-scene squint sliding spotlight SAR algorithm was carried out with the following parameters:
Center frequency: 9.6 GHz
Pulse repetition frequency: 3800 Hz
Signal bandwidth: 600 MHz
Azimuth resolution: 1 m
Slant range of the rotation center: 1234 km
Slant range of the scene center: 617 km
Antenna squint angle at the scene center: 3°
Antenna aperture: 4 m
Scene size: 15 km (range) × 20 km (azimuth)
Point targets are placed at the scene center.
As can be seen from FIG. 5 and FIG. 7, the sub-aperture data processed by the method are well focused: the peak sidelobe ratio is about -13 dB and the integrated sidelobe ratio is about -9.5 dB. Since the final step of the imaging algorithm uses deramp processing, the azimuth pixel spacing is

\delta = \frac{\lambda\, r\, \mathrm{PRF}}{2\, V\, N}

where N is the number of azimuth processing points. With this method, the azimuth pixel spacing is 0.3662 m after division into two sub-apertures and 0.1881 m after full-aperture synthesis. In FIG. 4 and FIG. 6 the 3 dB widths are about 6 and 5 pixels respectively, so the sub-aperture imaging resolution is about 2 m, while in FIG. 8 it is improved to 1 m after full-aperture processing. Therefore, full-aperture processing under large-scene squint not only improves the resolution but also achieves a good focusing effect.
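A rough numerical check of the pixel-spacing formula only: the satellite velocity V and the azimuth point count N are not given in the text, so the values used below are assumptions chosen purely to illustrate the arithmetic, and the printed spacing will therefore not reproduce the quoted 0.3662 m exactly.

```python
# Azimuth pixel spacing delta = lam*r*PRF/(2*V*N); V and N below are ASSUMED values,
# chosen only to exercise the formula (they are not stated in the text).
lam = 3.0e8 / 9.6e9          # wavelength at 9.6 GHz [m]
r, PRF = 617e3, 3800.0       # scene-centre slant range [m], pulse repetition frequency [Hz]
V, N = 7600.0, 32768         # assumed satellite velocity [m/s] and azimuth FFT length
delta = lam * r * PRF / (2.0 * V * N)
print(delta)                 # pixel spacing under the assumed V and N
print(6 * 0.3662, 5 * 0.3662)  # ~2 m: quoted 3 dB widths times the quoted sub-aperture spacing
```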
Details not described herein are within the common knowledge of those skilled in the art.

Claims (4)

1. A signal processing method for a satellite-borne sliding spotlight synthetic aperture radar, characterized by comprising the following steps:
(1) dividing the collected raw data of the sliding spotlight synthetic aperture radar into sub-apertures;
(2) processing each divided block of sub-aperture data as follows:
performing an azimuth FFT; multiplying the azimuth-Fourier-transformed data by a scaling factor; performing a range FFT, and then performing range compression and range migration correction; performing a range inverse FFT; after the inverse FFT, performing azimuth compression and compensating the residual phase; performing quadratic-term compensation after the azimuth compression; then performing an azimuth inverse FFT; and, after the azimuth inverse FFT, performing deramp (frequency-modulation removal) processing;
(3) combining the processed sub-aperture data into full-aperture data;
(4) performing an azimuth FFT on the full-aperture data;
(5) performing residual frequency-modulation compensation;
(6) performing an azimuth inverse FFT;
(7) performing Specan compensation so that the signal is phase-preserving.
2. The method of claim 1, wherein the scaling factor in step (2) is given by:

H_1 = \exp\left( j\pi\, k(f_a;R)\, c(f_a)\left(t - \frac{2R(f_a,R_0)}{c}\right)^{2}\right),

where

c(f_a) = \frac{\Delta\alpha(f_a)}{\alpha(f_{dc})}, \quad \Delta\alpha(f_a) = \alpha(f_a) - \alpha(f_{dc}), \quad \alpha(f_a) = \frac{\sin(\theta_{ref})}{\sqrt{1-\left(\frac{f_a\lambda}{2V}\right)^{2}}}, \quad \alpha(f_{dc}) = \frac{\sin(\theta_c)}{\sqrt{1-\left(\frac{f_{dc}\lambda}{2V}\right)^{2}}}

and where f_dc is the Doppler center frequency of the full aperture, t is the satellite flight time, f_a is the azimuth frequency, c is the speed of light, R_0 is the slant range to the scene center, k(f_a;R) is the equivalent frequency-modulation rate, R(f_a,R_0) is the instantaneous distance from the scene center to the satellite, V is the satellite flight velocity, θ_ref is the squint angle at the scene center, θ_c is the squint angle at the center of the full aperture, and λ is the wavelength.
3. The method of claim 2, wherein:
k(f_a;R) = \frac{k_r}{1 - k_r\, R \sin(\theta_{ref})\,\frac{2\lambda}{c^{2}}\cdot\frac{\left(\frac{\lambda f_a}{2V}\right)^{2}}{\left[1-\left(\frac{\lambda f_a}{2V}\right)^{2}\right]^{3/2}}}

where k_r is the chirp rate of the linear frequency-modulated signal, R = R_0 + \frac{c}{2 f_s}\,[-nrn/2 : nrn/2-1], R_0 is the slant range to the scene center, f_s is the signal sampling rate, and nrn is the number of range samples.
4. The method of claim 3, wherein: in step (2), the azimuth compression function used for azimuth compression and residual-phase compensation is:

H_3 = \exp\left(j\frac{4\pi}{\lambda}\, R(f_a;r)\left[\sin\theta_{ref}\sqrt{1-\left(\frac{\lambda f_a}{2V}\right)^{2}} - 1\right]\right)\exp\left(j\Theta(f_a;r)\right)\exp\left(j\frac{2\pi f_a}{V}\cos\theta_{ref}\right),

where \Theta(f_a;r) = \frac{4\pi}{c^{2}}\, k(f_a;R)\left(c(f_a)+1\right)c(f_a)\left(\frac{R}{\sin(\theta_{ref})} - R_0\right)^{2} and R(f_a;r) is the slant range from the satellite to the ground target point.
CN 201210168820 2012-05-25 2012-05-25 Signal processing method of satellite-borne sliding spotlight synthetic aperture radar Active CN102680974B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201210168820 CN102680974B (en) 2012-05-25 2012-05-25 Signal processing method of satellite-borne sliding spotlight synthetic aperture radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201210168820 CN102680974B (en) 2012-05-25 2012-05-25 Signal processing method of satellite-borne sliding spotlight synthetic aperture radar

Publications (2)

Publication Number Publication Date
CN102680974A true CN102680974A (en) 2012-09-19
CN102680974B CN102680974B (en) 2013-08-28

Family

ID=46813186

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201210168820 Active CN102680974B (en) Signal processing method of satellite-borne sliding spotlight synthetic aperture radar

Country Status (1)

Country Link
CN (1) CN102680974B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007113469A1 (en) * 2006-03-31 2007-10-11 Qinetiq Limited System and method for processing imagery from synthetic aperture systems
CN101581780A (en) * 2008-05-14 2009-11-18 中国科学院电子学研究所 Three-dimensional focus imaging method of side-looking chromatography synthetic aperture radar
CN102288964A (en) * 2011-08-19 2011-12-21 中国资源卫星应用中心 Imaging processing method for spaceborne high-resolution synthetic aperture radar

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ALBERTO MOREIRA et al.: "Airborne SAR Processing of Highly Squinted Data Using a Chirp Scaling Approach with Integrated Motion Compensation", IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, vol. 32, no. 5, 30 September 1994 (1994-09-30), XP000670516, DOI: 10.1109/36.312891 *
LAN Tian: "Research on Sliding Spotlight SAR Technology" (滑动聚束式SAR技术研究), China Master's Theses Full-text Database, Information Science and Technology, vol. 2011, no. 2, 31 December 2011 (2011-12-31) *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103728618A (en) * 2014-01-16 2014-04-16 中国科学院电子学研究所 Implementation method of high resolution and wide swath spaceborne SAR (Synthetic Aperture Radar) system
CN103728618B (en) * 2014-01-16 2015-12-30 中国科学院电子学研究所 The satellite-borne SAR system implementation method of a kind of high resolving power, wide swath
CN105629231B (en) * 2014-11-06 2018-08-28 航天恒星科技有限公司 A kind of sub-aperture stitching method and system of SAR
CN105629231A (en) * 2014-11-06 2016-06-01 航天恒星科技有限公司 Method and system for splicing SAR sub-aperture
CN104391297A (en) * 2014-11-17 2015-03-04 南京航空航天大学 Sub-aperture partition PFA (Polar Format Algorithm) radar imaging method
CN104678393B (en) * 2015-01-30 2017-01-18 南京航空航天大学 Subaperture wave number domain imaging method for squint sliding spotlight SAR (Synthetic Aperture Radar)
CN104678393A (en) * 2015-01-30 2015-06-03 南京航空航天大学 Subaperture wave number domain imaging method for squint sliding spotlight SAR (Synthetic Aperture Radar)
CN106950567B (en) * 2017-03-30 2019-05-14 中国人民解放军国防科学技术大学 The sliding poly- SAR image processing method of ultra wide band based on high-order sub-aperture CS
CN106950567A (en) * 2017-03-30 2017-07-14 中国人民解放军国防科学技术大学 Ultra wide band based on high-order sub-aperture CS slides poly- SAR image processing methods
CN107390217A (en) * 2017-07-19 2017-11-24 中国人民解放军国防科学技术大学 The step-scan umber of pulse design method of Sliding spotlight SAR
CN107390217B (en) * 2017-07-19 2019-05-31 中国人民解放军国防科学技术大学 The step-scan umber of pulse design method of Sliding spotlight SAR
CN108318867A (en) * 2017-11-23 2018-07-24 北京遥感设备研究所 A kind of range migration correction method of sliding window arteries and veins group for alpha-beta tracking filter
CN108318867B (en) * 2017-11-23 2020-01-14 北京遥感设备研究所 Range migration correction method of sliding window pulse group aiming at alpha-beta tracking filtering
CN108267736A (en) * 2017-12-20 2018-07-10 西安空间无线电技术研究所 A kind of GEO SAR staring imagings mode orientation fuzziness determines method
CN108267736B (en) * 2017-12-20 2019-11-29 西安空间无线电技术研究所 A kind of GEO SAR staring imaging mode orientation fuzziness determines method
CN109613507B (en) * 2018-12-21 2021-04-06 北京理工大学 Detection method for high-order maneuvering target radar echo
CN109613507A (en) * 2018-12-21 2019-04-12 北京理工大学 A kind of detection method for high-order maneuvering target radar return
CN110058232A (en) * 2019-04-19 2019-07-26 北京空间飞行器总体设计部 A kind of big strabismus sliding beam bunching mode echo-signal orientation preprocess method of satellite-borne SAR and system
CN110058232B (en) * 2019-04-19 2021-04-13 北京空间飞行器总体设计部 Satellite-borne SAR large squint sliding bunching mode echo signal azimuth preprocessing method and system
CN113359132A (en) * 2021-04-30 2021-09-07 西安电子科技大学 Real-time imaging method and device for spaceborne squint synthetic aperture radar
CN113376632A (en) * 2021-05-18 2021-09-10 南京航空航天大学 Large squint airborne SAR imaging method based on pretreatment and improved PFA
CN113376632B (en) * 2021-05-18 2023-12-15 南京航空航天大学 Large strabismus airborne SAR imaging method based on pretreatment and improved PFA

Also Published As

Publication number Publication date
CN102680974B (en) 2013-08-28

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant