CN111190181A - Real-time imaging processing method for unmanned aerial vehicle SAR (synthetic aperture radar) of bumpy platform - Google Patents


Info

Publication number
CN111190181A
CN111190181A (application number CN202010021589.9A; granted publication CN111190181B)
Authority
CN
China
Prior art keywords
signal
azimuth
frequency
distance
dsp
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010021589.9A
Other languages
Chinese (zh)
Other versions
CN111190181B (en)
Inventor
梁毅
张罡
范家赫
陈文平
赵衍一
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202010021589.9A priority Critical patent/CN111190181B/en
Publication of CN111190181A publication Critical patent/CN111190181A/en
Application granted granted Critical
Publication of CN111190181B publication Critical patent/CN111190181B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/89Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S13/90Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • G01S13/9004SAR image acquisition techniques
    • G01S13/9019Auto-focussing of the SAR signals
    • G01S13/9011SAR image acquisition techniques with frequency domain processing of the SAR signals in azimuth

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention belongs to the field of unmanned-aerial-vehicle (UAV) remote-sensing mapping and discloses a real-time imaging processing method for UAV-borne SAR on a bumpy platform. The real-time processing is built on a heterogeneous multi-core architecture. Exploiting the high computational efficiency of an FPGA, the digital down-conversion, range pulse compression and azimuth pre-filtering stages, which consist mainly of multiply-accumulate operations, are executed on the FPGA, and the FPGA distributes data and allocates tasks according to the available DSP resources. Exploiting the high programming flexibility of the DSP, bumpy-platform SAR imaging based on multi-level motion compensation is executed on the DSP, combining several motion-error compensation techniques: motion compensation based on inertial-navigation information, data-based motion compensation, and image-domain autofocus, all performed at the DSP end. The method achieves high-resolution imaging on bumpy platforms typified by UAVs, greatly improves hardware-resource utilization, and offers a good focusing effect and strong robustness for real-time bumpy-platform SAR imaging.

Description

Real-time imaging processing method for unmanned aerial vehicle SAR (synthetic aperture radar) of bumpy platform
Technical Field
The invention relates to the field of unmanned aerial vehicle remote-sensing mapping, and in particular to a real-time imaging processing method for bumpy-platform UAV-borne SAR.
Background
Synthetic aperture radar (SAR) can acquire two-dimensional high-resolution images of a ground scene and has broad application prospects in remote-sensing detection and topographic mapping. Meanwhile, the UAV platform has unique advantages such as low cost and small size; mounting SAR on a UAV platform combines the strengths of both, so UAV SAR has become a research focus in recent years.
The UAV platform is light, and its flight path is easily disturbed by weather factors such as crosswind and air currents, which introduce large motion errors; it is a typical bumpy platform. Most existing solutions combine an inertial navigation system (INS) to perform motion compensation, eliminating most of the motion error along the beam line of sight (LOS); imaging requirements can be met when the residual motion error is smaller than half a resolution cell. For high-resolution SAR imaging, however, the accuracy of current INS hardware is insufficient: the residual motion error still degrades focusing, and the image exhibits a certain degree of defocusing and smearing. At the same time, platform bumpiness causes non-uniform spatial sampling, which introduces phase errors in the direction perpendicular to the LOS, again with a non-negligible effect on focusing.
With the rapid development of semiconductor technology, the computational efficiency of hardware platforms such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs) has further improved, and they are increasingly applied to real-time SAR imaging. In engineering practice, cost and power consumption are the two main factors when trading off hardware platforms. Most existing real-time SAR processors are designed by stacking chips: they meet the real-time imaging requirement but introduce substantial performance redundancy, which is very unfavorable for controlling cost and reducing power consumption. Engineering schemes therefore focus on fully utilizing hardware resources, exploiting the strengths of each platform during real-time processing so as to balance algorithm performance against cost and power consumption.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a real-time imaging processing method for bumpy-platform UAV-borne SAR. Based on real-time processing with a heterogeneous multi-core chip set, it achieves high-resolution imaging of bumpy platforms typified by UAVs, greatly improves hardware-resource utilization, and offers a good focusing effect and strong robustness for real-time bumpy-platform SAR imaging.
The technical idea of the invention is as follows:
(1) Improve the traditional imaging flow to eliminate the influence of bumpy-platform motion errors: (1a) correct the platform's deviation from the ideal track according to the INS parameters, mainly eliminating the phase error caused by motion error along the beam LOS direction; (1b) since the Doppler chirp rate is strongly affected by the bumpy platform's motion parameters, estimate and compensate the residual Doppler chirp rate with the map-drift (MD) algorithm, greatly improving the focusing of the image; (1c) apply phase gradient autofocus (PGA) processing to further compensate, in the image domain, the residual phase error caused by acceleration.
(2) Design the real-time processing software architecture for the heterogeneous multi-core chips, combining the characteristics of the FPGA chip XC7A100T and the DSP chip TMS320C6678: (2a) fully exploit the FPGA's high computational efficiency by executing digital down-conversion (DDC), range pulse compression (PC) and azimuth pre-filtering (PF), which consist mainly of multiply-accumulate operations, at the FPGA end; (2b) partition the data matrix along the range direction and distribute the blocks to several DSP chips through the FPGA; (2c) fully exploit the DSP's high programming flexibility by executing the improved bumpy-platform SAR imaging algorithm of step (1) at the DSP end.
In order to achieve the purpose, the invention is realized by adopting the following technical scheme.
A real-time imaging processing method for bumpy-platform UAV-borne SAR comprises the following steps:
Step 1. The UAV-borne SAR system transmits and receives LFM pulse signals to obtain the intermediate-frequency (IF) real echo signal s_r(t_r). An analog-to-digital converter samples and quantizes s_r(t_r), giving the discrete IF real echo signal s_r(k). The FPGA performs digital down-conversion on s_r(k) to generate the zero-IF quadrature signal pair s_I(k) and s_Q(k), combined into the time-domain echo ss(t_r, t_a; R_S) = s_I(k) + j·s_Q(k); here t_r is the range fast-time variable, t_a the azimuth slow-time variable, R_S the scene-center slant range, and j the imaginary unit.
Step 2. The FPGA selects either the direct-sampling mode or the deskew mode according to the relation between the transmitted signal bandwidth B_r and the ADC sampling frequency f_s, and performs range pulse compression on the time-domain echo ss(t_r, t_a; R_S) to obtain the range-compressed signal ss_{PC,R}(t_r, t_a; R_S).
Step 3. The FPGA applies azimuth pre-filtering to the range-compressed signal ss_{PC,R}(t_r, t_a; R_S) to obtain the azimuth-filtered signal ss_{PF,R}(t_r, t_a; R_S).
Step 4. The FPGA partitions the azimuth-filtered signal ss_{PF,R}(t_r, t_a; R_S) into several data blocks along the range direction, adjacent blocks overlapping by G points, and sends each block to a DSP; each block is denoted ss_{n,R}(t_r, t_a; R_S), where n is the block index.
Step 5. The DSP obtains the phase error Φ_error(t_a) and uses it to apply INS-based motion compensation to ss_{n,R}(t_r, t_a; R_S), obtaining the INS-based motion-compensated signal ss_{INSn,R}(t_r, t_a; R_S).
Step 6. The DSP performs a matrix transposition on the INS-based motion-compensated signal ss_{INSn,R}(t_r, t_a; R_S) to obtain the transposed signal ss_{INSn,A}(t_r, t_a; R_S).
Step 7. The DSP applies a range Fourier transform and then an azimuth Fourier transform to the transposed signal ss_{INSn,A}(t_r, t_a; R_S), obtaining the two-dimensional frequency-domain echo SS_{INSn,A}(f_r, f_a; R_S); it constructs a range migration correction function H_RCMC(f_r, f_a; R_S), multiplies SS_{INSn,A}(f_r, f_a; R_S) by H_RCMC(f_r, f_a; R_S), and performs a two-dimensional inverse Fourier transform to obtain the migration-corrected signal ss_{RCMCn,A}(t_r, t_a; R_S).
Step 8. The DSP constructs a phase compensation function H_MD and multiplies ss_{RCMCn,A}(t_r, t_a; R_S) by H_MD to obtain the data-based motion-compensated signal ss_{MDn,A}(t_r, t_a; R_S).
Step 9. The DSP applies an azimuth Fourier transform to ss_{MDn,A}(t_r, t_a; R_S) to obtain the range-Doppler-domain signal sS_{MDn,A}(t_r, f_a; R_S); it constructs an azimuth matched-filter function and uses it to compress sS_{MDn,A}(t_r, f_a; R_S) in azimuth, obtaining the azimuth-compressed signal ss_{ACn,A}(t_r, t_a).
Step 10. The DSP obtains an estimate of the azimuth phase error with the phase gradient autofocus algorithm and corrects ss_{ACn,A}(t_r, t_a) accordingly, obtaining the accurately focused range-block signal ss_{PGAn,A}(t_r, t_a).
Step 11. The focused range-block signals ss_{PGAn,A}(t_r, t_a) produced by the DSPs are stitched according to the number of overlapping points G of adjacent data blocks and assembled into a complete SAR image.
Compared with the prior art, the invention has the beneficial effects that:
(1) Traditional UAV SAR imaging algorithms have a low tolerance to motion error; when the platform bumps heavily, the imaging result is prone to defocusing and smearing. The invention fully considers the flight characteristics of a bumpy platform and the influence of motion error on imaging performance, and makes several effective improvements to the traditional strip-map imaging algorithm: motion compensation based on inertial-navigation information, data-based motion compensation, and image-domain autofocus, all performed at the DSP end. Compared with existing bumpy-platform UAV SAR imaging algorithms, the method achieves a better focusing effect and stronger robustness.
(2) Most existing real-time processing designs stack chips, leaving part of the computing capacity and storage resources idle and wasted. The invention fully exploits the heterogeneous multi-core chips: the FPGA's high computational efficiency is used for digital down-conversion, range pulse compression and azimuth pre-filtering, which consist mainly of multiply-accumulate operations, and the FPGA distributes data and allocates tasks according to the DSP resources; the DSP's high programming flexibility is used for bumpy-platform SAR imaging based on multi-level motion compensation, including phase estimation, phase compensation and matrix transposition. Compared with existing schemes, the method effectively avoids idle and redundant hardware resources, achieving higher resource utilization with lower power consumption and cost.
(3) Between range processing and azimuth processing, the DSP performs a matrix transposition on the INS-based motion-compensated signal ss_{INSn,R}(t_r, t_a; R_S), matching the data storage layout to each module's access pattern. This greatly reduces the bus occupancy of data reads and writes, improves read/write efficiency, and effectively improves the real-time performance of system imaging.
Drawings
The invention is described in further detail below with reference to the figures and specific embodiments.
FIG. 1 is a flow chart of the real-time imaging processing method for bumpy-platform UAV-borne SAR of the present invention;
FIG. 2 is the broadside strip-map imaging geometry model of the UAV-borne SAR;
FIG. 3 is a flow chart of the FPGA digital down-conversion processing;
FIG. 4 is a flow chart of the FPGA range pulse compression processing, where fig. 4(a) is the flow chart of direct-sampling pulse compression and fig. 4(b) is the flow chart of deskew pulse compression;
FIG. 5 is a schematic diagram of data distribution and task allocation by the FPGA to DSP1 and DSP2;
FIG. 6 is a diagram of the DSP matrix transposition;
FIG. 7 is a flow chart of the DSP's data-based motion compensation using the MD principle;
FIG. 8 is a schematic diagram of the azimuth sub-aperture division in the MD algorithm;
FIG. 9 is a flow chart of the DSP executing PGA;
FIG. 10 shows the imaging result of the bumpy-platform UAV-borne SAR real-time imaging processing method of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to examples, but it will be understood by those skilled in the art that the following examples are only illustrative of the present invention and should not be construed as limiting the scope of the present invention.
The invention provides a real-time imaging processing method for bumpy-platform UAV-borne SAR; the processing flow is shown in fig. 1. The FPGA executes digital down-conversion, range pulse compression, azimuth pre-filtering and data distribution; DSP1 and DSP2 then execute INS-based motion compensation, matrix transposition, range migration correction, data-based motion compensation, azimuth compression and PGA; finally, image stitching is completed on DSP1 and the result is sent to the system board through a PCIe interface. The invention is described in further detail below with reference to the accompanying drawings.
Step 1. The UAV-borne SAR system transmits and receives LFM pulse signals to obtain the intermediate-frequency real echo signal s_r(t_r). An analog-to-digital converter (ADC) samples and quantizes s_r(t_r), giving the discrete intermediate-frequency real echo signal s_r(k). The FPGA performs digital down-conversion (DDC) on s_r(k) to generate the zero-IF quadrature signal pair s_I(k) and s_Q(k), combined into the time-domain echo ss(t_r, t_a; R_S) = s_I(k) + j·s_Q(k).
Specifically, step 1 comprises the following substeps:
Substep 1.1. Referring to fig. 2, the UAV-borne SAR operates in the broadside strip-map mode. Ideally the platform flies along the Y axis at horizontal velocity v, with azimuth beam width θ_BW and flying height H. Point B is the scene center point and R_S is the scene-center slant range. Point P is an arbitrary point in the scene distributed along the azimuth direction, at distance X_n from point B; that is, X_n is the distance between the scene center point and an arbitrary azimuth-distributed point in the scene. From the geometric relationship, the instantaneous slant range between the platform and the point target is

R(t_a; R_S) = sqrt( R_S^2 + (v·t_a − X_n)^2 )

where t_a is the azimuth slow-time variable. Based on this instantaneous slant range, the intermediate-frequency real echo signal can be written as

s_r(t_r, t_a) = w_r(t_r − 2R(t_a; R_S)/c) · w_a(t_a) · cos[ 2π f_I t_r + π k_r (t_r − 2R(t_a; R_S)/c)^2 − 4π R(t_a; R_S)/λ ]

where t_r is the range fast-time variable, c is the speed of light, f_I is the intermediate frequency before ADC sampling, k_r is the chirp rate of the LFM signal, and w_r(·), w_a(·) denote the range and azimuth envelope functions.
Substep 1.2. The discrete intermediate-frequency real echo signal is

s_r(k) = s_r(t_r, t_a)|_{t_r = k·Δt},  k = 0, 1, …, M−1

where k is the discrete time variable, Δt the sampling interval, M the number of range samples per pulse, and N_a the number of azimuth sample points.
Substep 1.3. Referring to fig. 3, after receiving the discrete intermediate-frequency real echo signal s_r(k), the FPGA stores it in read-only memory (ROM) and uses floating-point intellectual-property (IP) cores to multiply it by cos(2π f_I k Δt) and sin(2π f_I k Δt) respectively, where Δt is the interval between adjacent sampling points. A low-pass filter f(k), generated with the fdatool utility in MATLAB, is preset in the FPGA's ROM; the two mixed signals are shift-multiply-accumulated with f(k) to complete the low-pass filtering and produce the zero-IF quadrature signal pair s_I(k) and s_Q(k). The above process can be expressed as

s_I(k) = [ s_r(k) · cos(2π f_I k Δt) ] ⊛ f(k)
s_Q(k) = −[ s_r(k) · sin(2π f_I k Δt) ] ⊛ f(k)

where ⊛ denotes convolution, i.e. a summation Σ_{m=0}^{M−1} in which increasing the summation variable m shifts the vector, with M the total number of sample points.
Substep 1.4. The zero-IF quadrature signals s_I(k) and s_Q(k) serve as the real and imaginary parts of a complex signal, forming the time-domain echo

ss(t_r, t_a; R_S) = w_r(t_r − 2R(t_a; R_S)/c) · w_a(t_a) · exp(−j·4π R(t_a; R_S)/λ) · exp( jπ k_r (t_r − 2R(t_a; R_S)/c)^2 )

where w_r(t_r) is the range time-domain window function, w_a(t_a) the azimuth time-domain window function, j the imaginary unit, and λ the wavelength.
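For illustration, the DDC of substeps 1.3–1.4 (quadrature mixing followed by low-pass filtering) can be sketched in Python; the moving-average filter below merely stands in for the fdatool-designed low-pass filter and is illustrative only.

```python
import numpy as np

def digital_down_convert(s_r, f_i, dt, lp_taps):
    """Mix a real IF signal to zero IF and low-pass filter to obtain the
    quadrature pair s_I(k), s_Q(k), returned as one complex baseband signal."""
    k = np.arange(len(s_r))
    # quadrature mixing with local oscillators at the intermediate frequency
    s_i = s_r * np.cos(2 * np.pi * f_i * k * dt)
    s_q = -s_r * np.sin(2 * np.pi * f_i * k * dt)
    # low-pass filtering = shift, multiply and accumulate (a convolution)
    s_i = np.convolve(s_i, lp_taps, mode="same")
    s_q = np.convolve(s_q, lp_taps, mode="same")
    return s_i + 1j * s_q

# toy check: a pure IF tone should land near DC with amplitude ~0.5
fs, f_i = 100e6, 25e6
k = np.arange(1024)
tone = np.cos(2 * np.pi * f_i * k / fs)
taps = np.ones(31) / 31  # crude moving-average low-pass (illustrative)
bb = digital_down_convert(tone, f_i, 1 / fs, taps)
```

The sign on the Q channel follows the usual convention that the complex baseband equals the analytic signal shifted down by f_I.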
Step 2. The FPGA selects the direct-sampling mode or the deskew mode according to the relation between the transmitted signal bandwidth B_r and the ADC sampling frequency f_s, and performs range pulse compression on the time-domain echo ss(t_r, t_a; R_S) to obtain the range-compressed signal ss_{PC,R}(t_r, t_a; R_S); the subscript PC denotes range pulse compression and the subscript R indicates that the data are contiguous along the range direction.
Specifically, when the system bandwidth is low, the ADC sampling frequency satisfies the Nyquist criterion and the FPGA can obtain the range-compressed result ss_{PC,R}(t_r, t_a; R_S) by direct-sampling pulse compression; when the system bandwidth is high, the maximum ADC sampling frequency cannot satisfy the Nyquist criterion and the FPGA must obtain ss_{PC,R}(t_r, t_a; R_S) by deskew pulse compression, which equivalently reduces the system sampling frequency. Step 2 comprises the following substeps:
Substep 2.1. The FPGA performs range pulse compression by frequency-domain matched filtering. It first calls the Fourier transform (FFT) IP core to transform the time-domain echo ss(t_r, t_a; R_S) to the range frequency domain:

Ss(f_r, t_a; R_S) = FFT_R[ ss(t_r, t_a; R_S) ]

where FFT_R denotes the range Fourier transform.
Substep 2.2, based on the transmission signal bandwidth BrAnd ADC sampling frequency fsThe relationship between the two is that the distance direction pulse compression form is selected, and the specific form is as follows:
(a) when B is presentr<fsAnd the FPGA executes direct sampling pulse pressure processing. According to the frequency modulation rate k of LFM signalrConstructing a frequency domain matched filtering reference function HPCAnd presetting the FPGA in a ROM of the FPGA:
Figure BDA0002360957910000081
wherein f isrRepresenting the distance versus frequency variation. Referring to FIG. 4(a), the multiplier is invoked to convert Ss (f)r,ta;RS) And HPCMultiplying; and (3) calling an IP core of inverse Fourier transform (IFFT) to transform the multiplication result to a two-dimensional time domain to obtain a range direction pulse compression signal:
ssPC,R(tr,ta;RS)=IFFTR[Ss(fr,ta;RS)·HPC]
wherein the IFFTRRepresenting the inverse fourier transform of the distance.
(b) When B is presentr≥fsAnd the FPGA executes the deskew pulse pressure processing. Constructing a deskew function S that eliminates Residual Video Phase (RVP) termsc(fi) And is preset in ROM of FPGA
Figure BDA0002360957910000091
Wherein f isiIs a baseband range frequency variable. Referring to FIG. 4(b), the multiplier is invoked to convert Ss (f)r,ta;RS) And Sc(fi) Multiplying, namely calling the IP core of IFFT to check the product result and perform inverse Fourier transform of distance direction, then calling a multiplier and the IP core of FFT to check the time domain signal and perform windowed Fourier transform to obtain a distance direction pulse compression signal:
ssPC,R(tr,ta;RS)=FFTR[wr(tr)·IFFTR[Ss(fr,ta;RS)·Sc(fi)]]
wherein, wr(tr) Representing a distance-time-domain window function, FFTRRepresenting a distance fourier transform.
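Assuming the standard matched-filter reference H_PC(f_r) = exp(jπ f_r²/k_r) (the patent gives the function only as an image), the direct-sampling pulse compression of substep 2.2(a) can be sketched as:

```python
import numpy as np

def range_pulse_compress(ss, kr, fs):
    """Direct-sampling range pulse compression by frequency-domain matched
    filtering. ss: complex baseband samples of one pulse; kr: LFM chirp
    rate (Hz/s); fs: sampling frequency (Hz). The reference function
    exp(j*pi*f_r**2/kr) is an assumed standard form."""
    fr = np.fft.fftfreq(len(ss), d=1.0 / fs)   # range frequency axis (Hz)
    h_pc = np.exp(1j * np.pi * fr ** 2 / kr)   # matched-filter reference
    return np.fft.ifft(np.fft.fft(ss) * h_pc)

# toy check: an LFM chirp (time-bandwidth product 500) compresses sharply
fs, T, kr = 100e6, 10e-6, 5e12            # bandwidth kr*T = 50 MHz < fs
t = np.arange(int(T * fs)) / fs - T / 2
chirp = np.exp(1j * np.pi * kr * t ** 2)
out = np.abs(range_pulse_compress(chirp, kr, fs))
```

After filtering, the chirp's energy concentrates into a mainlobe a few samples wide, so the peak stands far above the mean level.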
Step 3. The FPGA applies azimuth pre-filtering to the range-compressed signal ss_{PC,R}(t_r, t_a; R_S) to obtain the azimuth-filtered signal ss_{PF,R}(t_r, t_a; R_S).
Specifically, the Doppler bandwidth B_a is first computed from the radar wavelength, beam width and flight speed:

B_a = 2v·θ_BW / λ ≈ 2v / D_a

where D_a is the antenna aperture, θ_BW the beam width and λ the wavelength. Low-pass filter coefficients are then generated with the fdatool utility in MATLAB and preset in the FPGA's ROM; the passband width of the low-pass filter is the Doppler bandwidth B_a and its sampling frequency is the system's pulse repetition frequency (PRF). The filtering operation is applied to ss_{PC,R}(t_r, t_a; R_S) along the azimuth direction and the output is stored as a matrix, namely the azimuth-filtered signal ss_{PF,R}(t_r, t_a; R_S). This azimuth pre-filtering removes the influence of most moving targets, improving the imaging result.
Step 4. The FPGA partitions the azimuth-filtered signal ss_{PF,R}(t_r, t_a; R_S) into several data blocks along the range direction, adjacent blocks overlapping by G points, and sends each block to a DSP; each block is denoted ss_{n,R}(t_r, t_a; R_S), where n is the block index.
Specifically, referring to fig. 5, the FPGA performs task control and data distribution for several DSP chips. ss_{PF,R}(t_r, t_a; R_S) is divided along the range direction into two data blocks, denoted ss_{1,R}(t_r, t_a; R_S) and ss_{2,R}(t_r, t_a; R_S) (n = 1, 2), with G = 512 overlapping points in the range direction; the blocks are distributed to DSP1 and DSP2 through the Serial RapidIO (SRIO) interface together with the imaging task instruction.
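The block partitioning of step 4 (range blocks sharing G overlapping rows) can be sketched generically as follows; this is a hypothetical helper, since the patent specifies only the overlap rule, not the block sizing.

```python
import numpy as np

def split_range_blocks(ss_pf, n_blocks, overlap):
    """Split a matrix (rows: range bins) into n_blocks along range, with
    `overlap` rows shared between neighbouring blocks."""
    n_range = ss_pf.shape[0]
    # rows per block so that the overlapped blocks tile the full extent
    step = (n_range + (n_blocks - 1) * overlap) // n_blocks
    blocks, start = [], 0
    for i in range(n_blocks):
        stop = n_range if i == n_blocks - 1 else min(start + step, n_range)
        blocks.append(ss_pf[start:stop])
        start = stop - overlap  # neighbours share `overlap` rows
    return blocks
```

For the two-DSP case in the patent (G = 512), each block receives half of the range extent plus the shared margin.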
Step 5. The DSP obtains the phase error Φ_error(t_a) and uses it to apply INS-based motion compensation to ss_{n,R}(t_r, t_a; R_S), obtaining the INS-based motion-compensated signal ss_{INSn,R}(t_r, t_a; R_S).
Specifically, step 5 comprises the following substeps:
Substep 5.1. The DSP parses the SRIO frames according to the interrupt protocol and extracts the east velocity v_east, north velocity v_north and vertical velocity v_sky transmitted by the INS in the geographic coordinate system; combined with the track angle θ_yaw, the along-track velocity v_along, cross-track velocity v_cross and vertical velocity v_eleva in the imaging coordinate system are computed:

v_along = v_north·cos θ_yaw + v_east·sin θ_yaw
v_cross = v_east·cos θ_yaw − v_north·sin θ_yaw
v_eleva = v_sky

Substep 5.2. The DSP integrates the inertial-navigation velocity information and, combined with the ideal track, obtains the platform's motion error along the LOS direction; the motion error is converted to a phase error and INS-based motion compensation is applied to ss_{n,R}(t_r, t_a; R_S) to obtain ss_{INSn,R}(t_r, t_a; R_S), n = 1, 2.
Specifically, the three velocity components in the imaging coordinate system (v_along, v_cross and v_eleva) are linearly integrated to give the three-dimensional motion errors e_along, e_cross and e_eleva in that coordinate system; then, from the beam pitch angle β, the motion error along the LOS direction and the corresponding phase error are computed:

e_LOS(t_a) = e_cross(t_a)·cos β + e_eleva(t_a)·sin β
Φ_error(t_a) = 4π·e_LOS(t_a) / λ

Substep 5.3. Multiplying exp( jΦ_error(t_a) ) with ss_{n,R}(t_r, t_a; R_S) gives the INS-based motion-compensated signal:

ss_{INSn,R}(t_r, t_a; R_S) = ss_{n,R}(t_r, t_a; R_S) · exp( jΦ_error(t_a) )
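A minimal sketch of substeps 5.2–5.3: integrate the velocity errors to displacement errors, project them onto the LOS via the pitch angle β, convert to phase, and multiply onto the data. The sign convention of the phase term and the use of simple cumulative summation for the integration are assumptions.

```python
import numpy as np

def ins_motion_compensation(ss, v_cross_err, v_eleva_err, beta, prf, wavelength):
    """INS-based motion compensation. ss: range-compressed block
    (rows: range, cols: azimuth); v_cross_err, v_eleva_err: per-pulse
    velocity deviations from the ideal track (m/s); beta: beam pitch
    angle (rad); prf: pulse repetition frequency (Hz)."""
    # displacement errors: cumulative integral of the velocity errors
    e_cross = np.cumsum(v_cross_err) / prf
    e_eleva = np.cumsum(v_eleva_err) / prf
    # projection onto the line-of-sight direction
    e_los = e_cross * np.cos(beta) + e_eleva * np.sin(beta)
    phi_err = 4 * np.pi * e_los / wavelength   # assumed sign convention
    return ss * np.exp(1j * phi_err)[None, :]
```

With zero velocity error the data pass through unchanged; a constant cross-track velocity error produces a linearly growing phase correction.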
step 6, the DSP end pair is used for motion compensation signal ss based on INSINSn,R(tr,ta;RS) Matrix transposition is carried out to obtain a transposed signal ssINSn,A(tr,ta;RS) (ii) a Where subscript a indicates that the data is continuous in the Azimuth direction (Azimuth).
Specifically, the steps 1 to 5 are all distance direction processing, and require data to be continuous along the distance direction. The subsequent processing is carried out estimation and compensation aiming at the azimuth signal, and in order to improve the data read-write speed and reduce the hardware resource loss, the ss is subjected toINSn,R(tr,ta;RS) A matrix transposition operation is performed. Referring to FIG. 6, the DSP performs matrix transposition to obtain ss that is continuous along the azimuth directionINSn,A(tr,ta;RS)。
Step 7, the DSP end contra-rotating signal ssINSn,A(tr,ta;RS) Sequentially carrying out distance Fourier transform and azimuth Fourier transform to obtain two-dimensional frequency domain echo signal SSINSn,A(fr,fa;RS) (ii) a Constructing a range migration correction function HRCMC(fr,fa;RS) Two-dimensional frequency domain echo signal SSINSn,A(fr,fa;RS) And HRCMC(fr,fa;RS) Multiplying and performing two-dimensional inverse Fourier transform to obtain range migration correction signal ssRCMCn,A(tr,ta;RS)。
In particular, the meterCalculating ssINSn,A(tr,ta;RS) The middle-order range bending amount and the high-order range migration amount are used as a phase generation range migration correction function HRCMC(fr,fa;RS) (ii) a Then to ssINSn,A(tr,ta;RS) Compensating to obtain range migration correction signal ssRCMCn,A(tr,ta;RS) Wherein n is 1, 2.
Step 7 comprises the following substeps:
substep 7.1, DSP port pair ssINSn,A(tr,ta;RS) Distance direction FFT is carried out to obtain a distance frequency domain expression
Figure BDA0002360957910000111
Wherein, Wr(fr) As a function of a distance frequency domain window, fcCarrier frequency of radar system, frRepresenting the distance versus frequency variation.
Substep 7.2: the DSP applies an azimuth FFT to SsINSn,A(fr,ta;RS) to obtain the two-dimensional frequency-domain echo signal:
Figure BDA0002360957910000121
where Wa(fa) is the azimuth frequency-domain window function and fa is the azimuth frequency variable.
Substep 7.3: the second exponential term in the above equation is expanded as a low-order Maclaurin series in the variable fr:
Figure BDA0002360957910000122
From this expansion the coupling term between fr and fa is obtained, and the range migration correction function is constructed:
Figure BDA0002360957910000123
Substep 7.4: the DSP multiplies SSINSn,A(fr,fa;RS) by HRCMC(fr,fa;RS) and performs a two-dimensional IFFT to obtain the range migration corrected signal:
ssRCMCn,A(tr,ta;RS)=IFFT2[SSINSn,A(fr,fa;RS)·HRCMC(fr,fa;RS)]
where IFFT2[·] denotes the two-dimensional inverse Fourier transform.
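The FFT-multiply-IFFT flow of substeps 7.1 to 7.4 can be sketched as follows (NumPy is an illustrative stand-in for the DSP routines, and H_rcmc is assumed to be precomputed on the same (fr, fa) grid as the 2-D spectrum; the sketch does not reproduce the patent's exact HRCMC phase):

```python
import numpy as np

def rcmc(ss_ins_a, H_rcmc):
    """Range migration correction: 2-D FFT into (f_r, f_a), multiply by
    the correction function H_RCMC, then 2-D inverse FFT back to (t_r, t_a)."""
    SS = np.fft.fft2(ss_ins_a)        # SS_{INSn,A}(f_r, f_a; R_S)
    return np.fft.ifft2(SS * H_rcmc)  # ss_{RCMCn,A}(t_r, t_a; R_S)
```

With H_rcmc identically equal to 1 the signal is returned unchanged, which makes a quick sanity check of the transform pair.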
Step 8: the DSP constructs the phase compensation function HMD and multiplies the range migration corrected signal ssRCMCn,A(tr,ta;RS) by HMD to obtain the data-based motion compensation signal ssMDn,A(tr,ta;RS).
Specifically, step 8 comprises the following substeps:
Substep 8.1: referring to FIG. 7, the DSP performs data-based motion compensation using the map-drift (MD) algorithm. Referring to FIG. 8, the DSP divides the azimuth signal vector s(ta) into two sub-apertures, denoted s1(ta) and s2(ta), i.e. the front and back sub-aperture signals respectively:
Figure BDA0002360957910000131
where T is the synthetic aperture time, ka is the azimuth FM rate, and a(·) is the signal envelope. The DSP transforms s1(ta) and s2(ta) into the azimuth frequency domain to obtain S1(fa) and S2(fa):
Figure BDA0002360957910000132
Wherein
Figure BDA0002360957910000133
Substep 8.2: S1(fa) and S2(fa) are cross-correlated to obtain the relative shift Δn between the two look images, from which the estimate of the Doppler FM rate
Figure BDA0002360957910000134
is calculated:
Figure BDA0002360957910000135
where PRF denotes the pulse repetition frequency of the system. The DSP then doubly integrates the Doppler FM rate estimate
Figure BDA0002360957910000136
to obtain the phase error of the Doppler FM estimate, denoted
Figure BDA0002360957910000137
:
Figure BDA0002360957910000138
where ∬(·)dta denotes double integration along the azimuth slow time ta.
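The core of this substep, measuring the relative shift Δn between the two looks, can be sketched as follows (a NumPy sketch; the FFT-based circular correlation and the wrap-around handling are implementation choices, not taken from the patent):

```python
import numpy as np

def relative_shift(look1, look2):
    """Estimate the relative shift Δn between two (magnitude) look images
    by circular cross-correlation computed in the frequency domain."""
    corr = np.fft.ifft(np.fft.fft(look1) * np.conj(np.fft.fft(look2)))
    dn = int(np.argmax(np.abs(corr)))
    if dn > len(look1) // 2:      # interpret large lags as negative shifts
        dn -= len(look1)
    return dn
```

The Doppler FM rate estimate then follows from Δn and the PRF according to the patent's formula.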
Substep 8.3: from the estimate of the Doppler FM phase error
Figure BDA0002360957910000141
the phase compensation function HMD is constructed:
Figure BDA0002360957910000142
Substep 8.4: the DSP multiplies ssRCMCn,A(tr,ta;RS) by HMD to obtain the data-based motion compensation signal:
ssMDn,A(tr,ta;RS)=ssRCMCn,A(tr,ta;RS)·HMD
Step 9: the DSP applies an azimuth Fourier transform to the data-based motion compensation signal ssMDn,A(tr,ta;RS) to obtain the signal sSMDn,A(tr,fa;RS) in the range-Doppler domain; it constructs an azimuth matched filter function and applies it to sSMDn,A(tr,fa;RS) for azimuth compression, obtaining the azimuth-compressed signal ssACn,A(tr,ta).
Specifically, the DSP calculates the azimuth FM rate to generate the azimuth compression reference function and performs a first focusing of ssMDn,A(tr,ta;RS) to obtain ssACn,A(tr,ta), where n = 1, 2. Step 9 comprises the following substeps:
Substep 9.1: the DSP applies an azimuth FFT to ssMDn,A(tr,ta;RS) to obtain the range-Doppler-domain expression of the signal after estimation and compensation of the residual Doppler FM rate:
Figure BDA0002360957910000143
where sinc(·) denotes the sinc function; the first exponential term reflects the azimuth position of the target and the second exponential term is the azimuth FM term.
Substep 9.2: the azimuth matched filter function is constructed from the second exponential term in the above equation:
Figure BDA0002360957910000144
Substep 9.3: the DSP multiplies sSMDn,A(tr,fa;RS) by HAC(fa) and performs an azimuth IFFT to obtain the azimuth-compressed signal ssACn,A(tr,ta):
Figure BDA0002360957910000151
where IFFTA[·] denotes the azimuth inverse Fourier transform.
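Substeps 9.1 to 9.3 amount to a frequency-domain matched filter along azimuth. A minimal sketch follows, with the matched filter built as the conjugate spectrum of an assumed azimuth chirp replica exp(jπ·ka·ta²) rather than the patent's closed-form HAC(fa):

```python
import numpy as np

def azimuth_compress(ss_md, ka, prf):
    """Azimuth compression sketch: FFT along azimuth (last axis), multiply
    by a matched filter, inverse FFT. The filter is the conjugate spectrum
    of an assumed chirp replica exp(j*pi*ka*t_a^2)."""
    n_az = ss_md.shape[-1]
    t_a = (np.arange(n_az) - n_az // 2) / prf        # azimuth slow time
    replica = np.exp(1j * np.pi * ka * t_a ** 2)     # assumed azimuth chirp model
    H_ac = np.conj(np.fft.fft(replica))              # matched filter spectrum
    return np.fft.ifft(np.fft.fft(ss_md, axis=-1) * H_ac, axis=-1)
```

Filtering a copy of the replica itself compresses it to a peak of height n_az at lag zero, which is a convenient self-test of the filter.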
Step 10: the DSP obtains an estimate of the azimuth phase error using the phase gradient autofocus (PGA) algorithm and corrects the azimuth-compressed signal ssACn,A(tr,ta) with this estimate, obtaining the accurately focused range-block signal ssPGAn,A(tr,ta).
Specifically, PGA processing targets the high-order phase error in the image domain and further compensates the residual phase error of ssACn,A(tr,ta) to obtain the more accurately focused range-block signal ssPGAn,A(tr,ta), where n = 1, 2.
Step 10 comprises the following substeps:
Substep 10.1: referring to FIG. 9, the DSP first selects a number of high-energy range cells from the azimuth-compressed signal ssACn,A(tr,ta) and, for each selected range gate, finds the maximum amplitude along the azimuth direction and its index (azimuth sampling position).
Substep 10.2: the salient points are all moved to the centre of the image by a circular shift.
Substep 10.3: a segment of complex data centred on the maximum is extracted by windowing, and its first derivative is calculated.
Substep 10.4: the first derivative of the azimuth phase error is estimated and integrated to obtain the estimate of the azimuth phase error.
Substep 10.5: the azimuth signal is corrected using the estimate of the azimuth phase error. The above process is repeated, halving the window length on each iteration, until the stopping condition is met, i.e. the window length has shrunk to 1-3 azimuth cells, at which point all phase errors have been corrected. Because PGA works in a transform domain, multiple azimuth FFTs and IFFTs appear in FIG. 9.
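One round of the PGA loop described above might be sketched as follows (a simplified illustration using the common maximum-likelihood phase-gradient kernel; the rectangular window and the linear-trend removal are assumptions, not details taken from the patent):

```python
import numpy as np

def pga_iteration(img, window):
    """One phase gradient autofocus iteration (sketch).
    img: complex image, shape (n_range, n_azimuth); returns the estimated
    azimuth phase error, one value per azimuth sample."""
    n_az = img.shape[1]
    # 1) circularly shift the brightest azimuth sample of each range line to centre
    shifted = np.empty_like(img)
    for i, row in enumerate(img):
        shifted[i] = np.roll(row, n_az // 2 - int(np.argmax(np.abs(row))))
    # 2) window around the centre to isolate the point responses
    mask = np.zeros(n_az)
    mask[n_az // 2 - window // 2: n_az // 2 + window // 2] = 1.0
    g = np.fft.fft(shifted * mask, axis=1)  # back to the azimuth phase-history domain
    # 3) ML phase-gradient estimate accumulated over range lines, then integrated
    grad = np.angle(np.sum(g[:, 1:] * np.conj(g[:, :-1]), axis=0))
    phi = np.concatenate(([0.0], np.cumsum(grad)))
    phi -= np.linspace(0.0, phi[-1], n_az)  # remove the linear (image-shift) trend
    return phi
```

The caller would multiply the azimuth phase history by exp(-jφ), halve `window`, and repeat until the window shrinks to 1-3 azimuth cells, as the step describes.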
Step 11: the accurately focused range-block signals ssPGAn,A(tr,ta) obtained by the multiple DSPs are stitched according to the number G of overlapping points between adjacent data blocks and assembled into a complete SAR image.
Specifically, DSP2 sends ssPGA2,A(tr,ta) to DSP1; DSP1 stitches ssPGA1,A(tr,ta) and ssPGA2,A(tr,ta) according to the range-block overlap of 512 points and assembles them into a complete SAR image. DSP1 then sends the complete SAR image to the system board over a PCIe interface, completing the UAV-borne SAR real-time imaging processing design.
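The stitching rule of step 11 (adjacent range blocks share G overlapping rows, and each overlap is kept only once) can be sketched as follows; `stitch` is a hypothetical helper, not the DSP1 routine itself:

```python
import numpy as np

def stitch(blocks, overlap):
    """Concatenate range-blocked images along range, dropping the
    `overlap` duplicated leading rows of every block after the first
    (G = 512 range samples in the flight test)."""
    parts = [blocks[0]] + [b[overlap:] for b in blocks[1:]]
    return np.concatenate(parts, axis=0)
```

For two blocks of n1 and n2 range lines the result has n1 + n2 - G lines, as expected.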
The improvement achieved by the present invention can be further illustrated by the following field test.
An SAR system was mounted on a UAV platform and an imaging test was carried out over a ground cooperation area. Because the UAV platform is light, air turbulence causes it to jolt violently. The UAV-borne SAR real-time imaging processing software framework of the invention was loaded on the bumpy UAV platform to perform SAR imaging, and the resulting image is shown in FIG. 10.
Comparing the two images, the software architecture of the invention brings the following improvements to SAR imaging on a bumpy platform.
(1) The invention eliminates the defocusing and smearing caused by the motion errors of the jolting flight platform;
(2) by focusing the strong scatterers well, the invention effectively reduces the number of saturated pixel values in the image, so that the quantization coefficient remains in its normal range and the overall darkening of the image is resolved;
(3) the method estimates and compensates the effect of the motion errors on the beam pointing, effectively suppressing the influence of out-of-band clutter on the imaging algorithm.
The motion compensation method of the invention focuses the illuminated area well, with no smearing around strong scatterers. The real-time processing software framework completes the imaging task quickly and efficiently, meets the real-time requirement of the system, and can continuously detect and image a large scene.
Although the present invention has been described in detail in this specification with reference to specific and illustrative embodiments, it will be apparent to those skilled in the art that modifications and improvements can be made on the basis of the present invention. Accordingly, such modifications and improvements are intended to fall within the protection scope of the invention as claimed.

Claims (7)

1. A real-time imaging processing method for a bumpy-platform UAV-borne SAR, characterized by comprising the following steps:
Step 1: transmit and receive LFM pulse signals with the UAV-borne SAR system to obtain the real intermediate-frequency echo signal sr(tr); sample and quantize sr(tr) with an analog-to-digital converter to obtain the discrete real intermediate-frequency echo signal sr(k); the FPGA performs digital down-conversion on sr(k) to generate the zero-IF quadrature signal pair sI(k) and sQ(k), combined into the time-domain echo ss(tr,ta;RS)=sI(k)+jsQ(k); where tr is the range fast-time variable, ta is the azimuth slow-time variable, RS is the slant range to the scene centre, and j is the imaginary unit;
Step 2: the FPGA examines the relation between the transmitted signal bandwidth Br and the ADC sampling frequency fs, selects either the direct-sampling mode or the dechirp mode accordingly, and performs range pulse compression on the time-domain echo ss(tr,ta;RS) to obtain the range pulse-compressed signal ssPC,R(tr,ta;RS);
Step 3: the FPGA performs azimuth pre-filtering on the range pulse-compressed signal ssPC,R(tr,ta;RS) to obtain the azimuth-filtered signal ssPF,R(tr,ta;RS);
Step 4: the FPGA partitions the azimuth-filtered signal ssPF,R(tr,ta;RS) along the range direction into a number of data blocks with G overlapping points between adjacent blocks, and sends each data block to a DSP; each data block is denoted ssn,R(tr,ta;RS), where n indexes the data blocks;
Step 5: the DSP obtains the phase error
Figure FDA0002360957900000011
and uses this phase error
Figure FDA0002360957900000012
to perform INS-based motion compensation on ssn,R(tr,ta;RS), obtaining the INS-based motion compensation signal ssINSn,R(tr,ta;RS);
Step 6, the DSP end pair is used for motion compensation signal ss based on INSINSn,R(tr,ta;RS) Matrix transposition is carried out to obtain a transposed signal ssINSn,A(tr,ta;RS);
Step 7: the DSP sequentially applies a range Fourier transform and an azimuth Fourier transform to the transposed signal ssINSn,A(tr,ta;RS) to obtain the two-dimensional frequency-domain echo signal SSINSn,A(fr,fa;RS); it constructs the range migration correction function HRCMC(fr,fa;RS), multiplies SSINSn,A(fr,fa;RS) by HRCMC(fr,fa;RS), and performs a two-dimensional inverse Fourier transform to obtain the range migration corrected signal ssRCMCn,A(tr,ta;RS);
Step 8: the DSP constructs the phase compensation function HMD and multiplies the range migration corrected signal ssRCMCn,A(tr,ta;RS) by HMD to obtain the data-based motion compensation signal ssMDn,A(tr,ta;RS);
Step 9: the DSP applies an azimuth Fourier transform to the data-based motion compensation signal ssMDn,A(tr,ta;RS) to obtain the signal sSMDn,A(tr,fa;RS) in the range-Doppler domain; it constructs an azimuth matched filter function and applies it to sSMDn,A(tr,fa;RS) for azimuth compression, obtaining the azimuth-compressed signal ssACn,A(tr,ta);
Step 10: the DSP obtains an estimate of the azimuth phase error using the phase gradient autofocus algorithm and corrects the azimuth-compressed signal ssACn,A(tr,ta) with this estimate, obtaining the accurately focused range-block signal ssPGAn,A(tr,ta);
Step 11: the accurately focused range-block signals ssPGAn,A(tr,ta) obtained by the multiple DSPs are stitched according to the number G of overlapping points between adjacent data blocks and assembled into a complete SAR image.
2. The bumpy-platform UAV-borne SAR real-time imaging processing method according to claim 1, characterized in that step 1 comprises the following substeps:
Substep 1.1: the real intermediate-frequency echo signal sr(tr) is:
Figure FDA0002360957900000021
where tr is the range fast-time variable, c is the speed of light, fI is the intermediate frequency before ADC sampling, and kr is the FM rate of the LFM signal;
Figure FDA0002360957900000022
v is the horizontal velocity and Xn is the distance between the scene centre and an arbitrary point distributed along the azimuth direction in the scene;
Substep 1.2: the discrete real intermediate-frequency echo signal sr(k) is:
Figure FDA0002360957900000023
where k is the discrete time variable and Na is the number of azimuth sampling points;
Substep 1.3: the discrete real intermediate-frequency echo signal sr(k) is multiplied by cos(2πfIkΔt) and sin(2πfIkΔt) respectively; the two products are then shifted, multiplied and accumulated with the low-pass filter f(k) to generate the zero-IF quadrature signal pair sI(k) and sQ(k):
Figure FDA0002360957900000031
where Δt denotes the time interval between adjacent sampling points,
Figure FDA0002360957900000032
denotes convolution, and
Figure FDA0002360957900000033
denotes summation over m from 0 to M-1, where M is the total number of sampling points and an increase in the summation variable m represents a shift of the vector;
Substep 1.4: taking the zero-IF quadrature signals sI(k) and sQ(k) as the real and imaginary parts of a complex signal respectively, the time-domain echo ss(tr,ta;RS) is:
Figure FDA0002360957900000034
where wr(tr) is the range time-domain window function, wa(ta) is the azimuth time-domain window function, and λ is the wavelength.
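A minimal numerical sketch of this digital down-conversion follows (NumPy for illustration only; the filter length and the normalized frequencies are assumed values, and `np.convolve` stands in for the shift-multiply-accumulate of the claim):

```python
import numpy as np

def ddc(s_r, f_i, dt, lpf):
    """Digital down-conversion sketch: mix the real IF samples with
    quadrature carriers, low-pass filter each branch, and combine the
    branches as ss = s_I + j*s_Q."""
    k = np.arange(len(s_r))
    i_mix = s_r * np.cos(2.0 * np.pi * f_i * k * dt)
    q_mix = -s_r * np.sin(2.0 * np.pi * f_i * k * dt)  # minus sign: e^{-j2πf_I t} mixing
    s_i = np.convolve(i_mix, lpf, mode="same")
    s_q = np.convolve(q_mix, lpf, mode="same")
    return s_i + 1j * s_q
```

For a pure IF carrier cos(2π·fI·k·Δt + φ), the interior samples of the output settle to 0.5·e^{jφ}, i.e. the complex baseband tone at half the real-signal amplitude.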
3. The bumpy-platform UAV-borne SAR real-time imaging processing method according to claim 1, characterized in that step 2 specifically comprises the following substeps:
Substep 2.1: the FPGA Fourier-transforms the time-domain echo ss(tr,ta;RS) into the range frequency domain to obtain the range-frequency echo:
Ss(fr,ta;RS)=FFTR[ss(tr,ta;RS)]
where FFTR denotes the range Fourier transform;
Substep 2.2: the range pulse compression mode is selected according to the relation between the transmitted signal bandwidth Br and the ADC sampling frequency fs:
when Br < fs, the FPGA performs direct-sampling pulse compression; specifically, the frequency-domain matched-filter reference function HPC is constructed from the FM rate kr of the LFM signal:
Figure FDA0002360957900000041
where fr is the range frequency variable;
Ss(fr,ta;RS) is multiplied by HPC, and the product is inverse Fourier transformed to obtain the range pulse-compressed signal:
ssPC,R(tr,ta;RS)=IFFTR[Ss(fr,ta;RS)·HPC]
where IFFTR denotes the range inverse Fourier transform;
when Br ≥ fs, the FPGA performs dechirp pulse compression; specifically, the dechirp function Sc(fi) is constructed:
Figure FDA0002360957900000042
where fi is the baseband range frequency variable;
Ss(fr,ta;RS) is multiplied by Sc(fi), and the product is subjected to a range inverse Fourier transform followed by a windowed Fourier transform to obtain the range pulse-compressed signal:
ssPC,R(tr,ta;RS)=FFTR[wr(tr)·IFFTR[Ss(fr,ta;RS)·Sc(fi)]]
where wr(tr) is the range time-domain window function and FFTR denotes the range Fourier transform.
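The direct-sampling branch of this claim can be sketched numerically as follows (NumPy illustration; the sign convention of the matched filter, exp(jπ·fr²/kr) applied to an assumed up-chirp exp(jπ·kr·tr²), is a standard textbook pairing and not necessarily the exact HPC of the patent):

```python
import numpy as np

def pulse_compress_direct(ss, kr, fs):
    """Direct-sampling range pulse compression (B_r < f_s): multiply the
    range spectrum by a frequency-domain matched filter and inverse FFT."""
    n = ss.shape[-1]
    f_r = np.fft.fftfreq(n, d=1.0 / fs)         # range frequency grid
    H_pc = np.exp(1j * np.pi * f_r ** 2 / kr)   # matched filter (assumed sign)
    return np.fft.ifft(np.fft.fft(ss, axis=-1) * H_pc, axis=-1)
```

Fed an up-chirp exp(jπ·kr·t²) with a large time-bandwidth product, the unit-amplitude input collapses to a narrow peak while the total energy is preserved (|HPC| = 1), which is the quickest correctness check.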
4. The bumpy-platform UAV-borne SAR real-time imaging processing method according to claim 1, characterized in that step 5 comprises the following substeps:
Substep 5.1: the DSP parses the east velocity veast, north velocity vnorth and vertical velocity vsky delivered by the INS in the geographic coordinate system and, combined with the track angle θyaw, calculates the along-track velocity valong, cross-track velocity vcross and vertical velocity veleva in the imaging coordinate system:
Figure FDA0002360957900000051
Substep 5.2: the DSP linearly integrates the along-track velocity valong, cross-track velocity vcross and vertical velocity veleva in the imaging coordinate system to obtain the three-dimensional motion errors ealong, ecross and eeleva in that coordinate system, then calculates the line-of-sight (LOS) motion error eLOS and the corresponding phase error from the beam pitch angle β:
Figure FDA0002360957900000052
Figure FDA0002360957900000053
where λ is the wavelength;
Substep 5.3: multiplying
Figure FDA0002360957900000054
by ssn,R(tr,ta;RS) yields the INS-based motion compensation signal:
Figure FDA0002360957900000055
5. The bumpy-platform UAV-borne SAR real-time imaging processing method according to claim 2, characterized in that step 7 comprises the following substeps:
Substep 7.1: the DSP applies a range Fourier transform to ssINSn,A(tr,ta;RS) to obtain the range frequency-domain expression:
Figure FDA0002360957900000056
where Wr(fr) is the range frequency-domain window function, wa(ta) is the azimuth time-domain window function, fc is the radar carrier frequency, and fr is the range frequency variable;
Substep 7.2: the DSP applies an azimuth Fourier transform to SsINSn,A(fr,ta;RS) to obtain the two-dimensional frequency-domain echo signal:
Figure FDA0002360957900000061
where Wa(fa) is the azimuth frequency-domain window function and fa is the azimuth frequency variable;
Substep 7.3: the second exponential term in the above equation is expanded as a low-order Maclaurin series in the variable fr:
Figure FDA0002360957900000062
from this expansion the coupling term between fr and fa is obtained, and the range migration correction function is constructed:
Figure FDA0002360957900000063
Substep 7.4: the DSP multiplies SSINSn,A(fr,fa;RS) by HRCMC(fr,fa;RS) and performs a two-dimensional inverse Fourier transform to obtain the range migration corrected signal:
ssRCMCn,A(tr,ta;RS)=IFFT2[SSINSn,A(fr,fa;RS)·HRCMC(fr,fa;RS)]
where IFFT2[·] denotes the two-dimensional inverse Fourier transform.
6. The bumpy-platform UAV-borne SAR real-time imaging processing method according to claim 1, characterized in that step 8 comprises the following substeps:
Substep 8.1: the DSP divides the azimuth signal vector s(ta) into front and back sub-apertures, denoted s1(ta) and s2(ta) respectively:
Figure FDA0002360957900000071
where T is the synthetic aperture time, ka is the azimuth FM rate, and a(·) is the signal envelope; the DSP transforms s1(ta) and s2(ta) into the azimuth frequency domain to obtain S1(fa) and S2(fa):
Figure FDA0002360957900000072
Wherein
Figure FDA0002360957900000073
where fa is the azimuth frequency variable;
Substep 8.2: S1(fa) and S2(fa) are cross-correlated to obtain the relative shift Δn between the two look images, from which the estimate of the Doppler FM rate
Figure FDA0002360957900000074
is calculated:
Figure FDA0002360957900000075
where Na denotes the number of azimuth sampling points and PRF denotes the pulse repetition frequency of the system; the DSP then doubly integrates the Doppler FM rate estimate
Figure FDA0002360957900000076
to obtain the phase error of the Doppler FM estimate, denoted
Figure FDA0002360957900000077
:
Figure FDA0002360957900000078
where ∬(·)dta denotes double integration along the azimuth slow time ta;
Substep 8.3: from the estimate of the Doppler FM phase error
Figure FDA0002360957900000079
the phase compensation function HMD is constructed:
Figure FDA0002360957900000081
Substep 8.4: ssRCMCn,A(tr,ta;RS) is multiplied by HMD to obtain the data-based motion compensation signal:
ssMDn,A(tr,ta;RS)=ssRCMCn,A(tr,ta;RS)·HMD
7. The bumpy-platform UAV-borne SAR real-time imaging processing method according to claim 2, characterized in that step 9 comprises the following substeps:
Substep 9.1: the DSP applies an azimuth Fourier transform to ssMDn,A(tr,ta;RS) to obtain the range-Doppler-domain expression of the signal after estimation and compensation of the residual Doppler FM rate:
Figure FDA0002360957900000082
where sinc(·) denotes the sinc function; the first exponential term reflects the azimuth position of the target and the second exponential term is the azimuth FM term; Wa(fa) is the azimuth frequency-domain window function, fa is the azimuth frequency variable, and fc is the radar carrier frequency;
Substep 9.2: the azimuth matched filter function is constructed from the second exponential term in the above equation:
Figure FDA0002360957900000083
Substep 9.3: sSMDn,A(tr,fa;RS) is multiplied by HAC(fa), and an azimuth inverse Fourier transform is performed to obtain the azimuth-compressed signal ssACn,A(tr,ta):
Figure FDA0002360957900000084
where IFFTA[·] denotes the azimuth inverse Fourier transform.
CN202010021589.9A 2020-01-09 2020-01-09 Real-time imaging processing method for unmanned aerial vehicle SAR (synthetic aperture radar) of bumpy platform Active CN111190181B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010021589.9A CN111190181B (en) 2020-01-09 2020-01-09 Real-time imaging processing method for unmanned aerial vehicle SAR (synthetic aperture radar) of bumpy platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010021589.9A CN111190181B (en) 2020-01-09 2020-01-09 Real-time imaging processing method for unmanned aerial vehicle SAR (synthetic aperture radar) of bumpy platform

Publications (2)

Publication Number Publication Date
CN111190181A true CN111190181A (en) 2020-05-22
CN111190181B CN111190181B (en) 2022-03-04

Family

ID=70708581

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010021589.9A Active CN111190181B (en) 2020-01-09 2020-01-09 Real-time imaging processing method for unmanned aerial vehicle SAR (synthetic aperture radar) of bumpy platform

Country Status (1)

Country Link
CN (1) CN111190181B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111929681A (en) * 2020-06-24 2020-11-13 苏州理工雷科传感技术有限公司 Real-time imaging processing system based on light small-sized unmanned aerial vehicle carries SAR
CN112764029A (en) * 2020-12-16 2021-05-07 北京无线电测量研究所 SAR real-time imaging realization method and device based on GPU
CN112986997A (en) * 2021-04-09 2021-06-18 中国科学院空天信息创新研究院 Unmanned aerial vehicle-mounted SAR real-time imaging processing method and device and electronic equipment
WO2023015623A1 (en) * 2021-08-13 2023-02-16 复旦大学 Segmented aperture imaging and positioning method of multi-rotor unmanned aerial vehicle-borne synthetic aperture radar
CN116645496A (en) * 2023-05-23 2023-08-25 北京理工大学 Dynamic look-around splicing and stabilizing method for trailer based on grid deformation
CN116645496B (en) * 2023-05-23 2024-07-05 北京理工大学 Dynamic look-around splicing and stabilizing method for trailer based on grid deformation

Citations (4)

Publication number Priority date Publication date Assignee Title
CN104007437A (en) * 2014-05-21 2014-08-27 西安电子科技大学 SAR real-time imaging processing method based on FPGA and multiple DSPs
CN104777479A (en) * 2015-05-05 2015-07-15 西安电子科技大学 Front-side-looking SAR real-time imaging method based on multi-core DSP
EP3144702A1 (en) * 2015-09-17 2017-03-22 Institute of Electronics, Chinese Academy of Sciences Method and device for synthethic aperture radar imaging based on non-linear frequency modulation signal
CN110146857A (en) * 2019-05-17 2019-08-20 西安电子科技大学 One kind is jolted platform SAR three-dimensional motion error estimation


Non-Patent Citations (3)

Title
Zhou Feng et al.: "A Motion Compensation Method for Airborne Highly Squinted SAR", Acta Electronica Sinica *
Zhou Fang et al.: "FPGA Implementation of a Real-Time Imaging Processing System for Airborne High-Resolution Spotlight SAR", Journal of Electronics & Information Technology *
Chen Wenping: "Design of Efficient Real-Time Imaging Processing for Missile-Borne SAR Guidance", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (8)

Publication number Priority date Publication date Assignee Title
CN111929681A (en) * 2020-06-24 2020-11-13 苏州理工雷科传感技术有限公司 Real-time imaging processing system based on light small-sized unmanned aerial vehicle carries SAR
CN111929681B (en) * 2020-06-24 2022-03-25 苏州理工雷科传感技术有限公司 Real-time imaging processing system based on light small-sized unmanned aerial vehicle carries SAR
CN112764029A (en) * 2020-12-16 2021-05-07 北京无线电测量研究所 SAR real-time imaging realization method and device based on GPU
CN112764029B (en) * 2020-12-16 2024-03-22 北京无线电测量研究所 SAR real-time imaging realization method and device based on GPU
CN112986997A (en) * 2021-04-09 2021-06-18 中国科学院空天信息创新研究院 Unmanned aerial vehicle-mounted SAR real-time imaging processing method and device and electronic equipment
WO2023015623A1 (en) * 2021-08-13 2023-02-16 复旦大学 Segmented aperture imaging and positioning method of multi-rotor unmanned aerial vehicle-borne synthetic aperture radar
CN116645496A (en) * 2023-05-23 2023-08-25 北京理工大学 Dynamic look-around splicing and stabilizing method for trailer based on grid deformation
CN116645496B (en) * 2023-05-23 2024-07-05 北京理工大学 Dynamic look-around splicing and stabilizing method for trailer based on grid deformation

Also Published As

Publication number Publication date
CN111190181B (en) 2022-03-04

Similar Documents

Publication Publication Date Title
CN111190181B (en) Real-time imaging processing method for unmanned aerial vehicle SAR (synthetic aperture radar) of bumpy platform
CN110095775B (en) Hybrid coordinate system-based bump platform SAR (synthetic Aperture Radar) rapid time domain imaging method
WO2023015623A1 (en) Segmented aperture imaging and positioning method of multi-rotor unmanned aerial vehicle-borne synthetic aperture radar
CN111856461B (en) Improved PFA-based bunching SAR imaging method and DSP implementation thereof
CN111443339A (en) Bistatic SAR space-variant correction imaging method, device, equipment and storage medium
CN108710111B (en) Two-dimensional space-variant correction method for airborne bistatic forward-looking SAR azimuth phase
CN108427115B (en) Method for quickly estimating moving target parameters by synthetic aperture radar
CN110515050B (en) Satellite-borne SAR real-time echo simulator based on GPU
CN108535724B (en) Moving target focusing method based on keystone transformation and integral quadratic function
CN114545411B (en) Polar coordinate format multimode high-resolution SAR imaging method based on engineering realization
CN104597447A (en) Improved sub-aperture SAR chirp scaling Omega-K imaging method
CN109358328B (en) Polar coordinate format imaging method of bistatic forward-looking SAR (synthetic aperture radar) of maneuvering platform
CN110412587B (en) Deconvolution-based downward-looking synthetic aperture three-dimensional imaging method and system
CN114384520B (en) Method for realizing refined radar imaging of sea surface ship by using maneuvering platform
CN111880180A (en) Self-focusing method for high-resolution moving ship SAR imaging
CN106950565A (en) Space-borne SAR Imaging jitter compensation method, imaging method
CN113589285A (en) Aircraft SAR real-time imaging method
CN108957452A (en) A kind of adaptive FFBP imaging method of synthetic aperture radar
CN111999734A (en) Broadband strabismus bunching SAR two-step imaging method and system
CN111208515B (en) SAR motion compensation method based on two-dimensional nonlinear mapping
CN114325704B (en) Rapid time domain imaging method of synthetic aperture radar based on wave number spectrum stitching
CN115877382A (en) Motion error estimation method based on adjacent pulse transformation difference of frequency modulated continuous wave
CN109343056B (en) RD imaging method and device for nonlinear frequency modulation SAR
CN109143235B (en) Ground moving target detection method for double-base forward-looking synthetic aperture radar
CN111257874A (en) PFA FPGA parallel implementation method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant