CN106526593B - Sub-pixel-level corner reflector automatic positioning method based on the rigorous SAR imaging model

Publication number: CN106526593B (application CN201611180810.5A)
Authority: CN (China)
Legal status: Active (granted)
Application number: CN201611180810.5A
Other languages: Chinese (zh)
Other versions: CN106526593A
Inventors
唐新明
高小明
李涛
陈乾福
陈卫南
吴丹芹
张文君
Current assignee: Ministry of Natural Resources Land Satellite Remote Sensing Application Center
Original assignee: Satellite Surveying and Mapping Application Center, NASG
Application CN201611180810.5A filed by Satellite Surveying and Mapping Application Center, NASG
Publications: CN106526593A (application), CN106526593B (grant)
Legal status: Active

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01S — RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 — Systems using the reflection or reradiation of radio waves, e.g. radar systems; analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/88 — Radar or analogous systems specially adapted for specific applications
    • G01S 13/89 — Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S 13/90 — Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • G01S 13/9021 — SAR image post-processing techniques
    • G01S 13/9023 — SAR image post-processing techniques combined with interferometric techniques

Abstract

The invention discloses a sub-pixel-level corner reflector automatic positioning method based on the rigorous SAR imaging model. The method comprises acquiring raw data, computing the initial pixel coordinates of the corner reflector points, resolving the pixel-level coordinates of the corner reflector points, and verifying them against interferometric parameter measurements with gross-error elimination, thereby obtaining accurate sub-pixel-level coordinates of the corner reflector points. The invention takes into account both the image features and the geometric characteristics of the corner reflectors, enables automated sub-pixel-level corner reflector extraction, improves point-extraction efficiency while guaranteeing the accuracy of the positioning coordinates, and realizes automated sub-pixel-level extraction of corner reflector point coordinates.

Description

Sub-pixel-level corner reflector automatic positioning method based on the rigorous SAR imaging model
Technical field
The present invention relates to the field of radar interferometry processing, and in particular to a sub-pixel-level corner reflector automatic positioning method based on the rigorous SAR imaging model.
Background art
Synthetic aperture radar (SAR) technology was born in 1951. By collecting and recording the electromagnetic radiation and scattering energy of targets and background in the microwave band, and extracting information useful for identifying target objects or phenomena through signal processing, it is widely used in topographic and feature mapping, ocean observation, earth-resource survey, military reconnaissance and surveillance, and so on. Corner reflectors (CRs) exhibit strong echo reflection and stable characteristics in SAR images, and have been applied in the geometric correction of SAR images and in the calibration of interferometric SAR (InSAR).
Current work on corner reflectors concentrates mainly on their design, such as reflector-face selection, machining tolerance, angular error, and incidence-angle installation error; little of the prior art addresses automatic positioning of corner reflectors, and none achieves sub-pixel-level automatic positioning. Most existing localization techniques exploit the fact that a corner reflector point has a strong echo while the surrounding terrain has a relatively weak one: a template window of a given size is used to detect and match against the image, a correlation coefficient is computed, and the point with the highest correlation is taken as the corner reflector position. This method is fast, but its detection accuracy reaches only pixel level. The prior art also identifies corner reflector points with methods based on persistent scatterer (PS) extraction, which compute the coherence coefficient and amplitude dispersion in a window around the candidate point and take the point with a high signal-to-noise ratio as the corner reflector position; this outperforms the correlation-coefficient method, with accuracy controllable within 1-2 pixels, but it easily requires tens of scenes, so the cost is excessive. There is further a corner reflector extraction method based on the signal-to-clutter ratio (SCR), which first estimates the peak offset of the initial position to determine the approximate extent of the trihedral target, then maps the SCR values over that extent and takes the point of maximum SCR as the trihedral target. However, all three prior-art methods lack quality evaluation and checking mechanisms, so the quality of the positioning result cannot be assessed; once a gross error occurs, a large positioning error is introduced into high-accuracy mapping.
SAR images are susceptible to speckle noise; if the template window used is inconsistent with the actual image features of the corner reflector, identification and detection accuracy is poor. Moreover, the location, incidence angle, and physical size of a corner reflector all noticeably affect its appearance in the SAR image. For example, if the radar incidence angle deviates from the ideal geometry, the trihedral corner effect is weak, the actually obtained corner reflector signature is unpredictable, and existing detection algorithms cannot recognize it; whether template matching or the PS-point method is used for extraction, detection accuracy remains low. These methods therefore lack generality, and their positioning accuracy cannot reach sub-pixel level. In addition, extracting corner reflector points with a template window cannot effectively provide an extraction accuracy, so whether a corner reflector point was identified correctly, and how accurately, cannot be evaluated. Certain corner reflectors are disturbed by strong backscattered echoes from their surroundings, which easily leads to mispositioning and impairs the usefulness of corner reflectors in high-accuracy measurement calibration.
It is therefore desirable to provide a novel automatic corner reflector positioning method that can overcome the above disadvantages.
Summary of the invention
The present invention proposes a sub-pixel-level corner reflector automatic positioning method based on the rigorous SAR imaging model. The invention uses the rigorous SAR imaging model to extract corner reflectors automatically, avoiding the low efficiency and low accuracy of manual involvement. Meanwhile, to remedy the inability of automatic corner reflector extraction algorithms to provide an intuitive quality assessment, the invention provides a point-by-point error and can eliminate gross-error points, improving the robustness of the point extraction result and fully overcoming one or more problems caused by the limitations and defects of the prior art.
Additional advantages, objects, and features of the invention will be set forth in part in the description that follows, and in part will become apparent to those of ordinary skill in the art upon examination of the following, or may be learned from practice of the invention. The objects and advantages of the invention may be realized and attained by the structure particularly pointed out in the written description, claims, and appended drawings.
The present invention provides a sub-pixel-level corner reflector automatic positioning method based on the rigorous SAR imaging model, characterized in that the method specifically comprises the following steps:
S1: acquire raw data, the raw data including the coordinates of the corner reflector points, the master image, and the imaging parameters of the master image;
S2: construct the rigorous SAR imaging model from the acquired raw data, compute with it the initial pixel coordinates (x0, y0) of all corner reflector points, and obtain the set {Pinit} of initial pixel coordinates of all corner reflector points;
S3: based on the set {Pinit} of initial pixel coordinates of the corner reflector points obtained in step S2, use the maximum-amplitude algorithm to resolve the pixel-level coordinates of all corner reflector points in the master image, obtaining the set {Ppix} of pixel-level coordinates of all corner reflector points in the master image;
S4: perform interferometric-parameter measurement verification on the pixel-level coordinates of all corner reflector points in {Ppix} to obtain the sub-pixel-level coordinates of the corner reflector points.
The present invention also provides a computer-readable medium having a computer program stored thereon, the computer program being used to execute the method according to the present invention.
The present invention takes into account both the image features and the geometric characteristics of the corner reflectors, enables automated sub-pixel-level corner reflector extraction, improves point-extraction efficiency while guaranteeing the accuracy of the positioning coordinates, and realizes automated sub-pixel-level extraction of corner reflector point coordinates.
Brief description of the drawings
Fig. 1 is a flowchart of the sub-pixel-level corner reflector automatic positioning method based on the rigorous SAR imaging model according to an embodiment of the present invention.
Fig. 2 shows the geometric relationship of terrain three-dimensional reconstruction in the geocentric Cartesian coordinate system according to an embodiment of the present invention.
Detailed description of embodiments
The invention is described more fully below with reference to the accompanying drawings, in which exemplary embodiments of the invention are illustrated.
The rigorous SAR imaging model solves for the three-dimensional coordinates of a target point from the range equation, the Doppler equation, and the interferometric phase equation. It describes the relationship between the basic interferometric SAR measurements and the position of the ground target point, fuses all information related to the ground target position, and is the generally acknowledged rigorous imaging model. The sub-pixel-level corner reflector automatic positioning method based on this model proposed by the present invention comprehensively considers the backscattering features and interferometric geometry of the image, obtains accurate positioning results for the corner reflectors, and provides the positioning error of each corner reflector, which assists in screening the corner reflectors and further guarantees the validity of the interferometric measurement calibration scheme.
As shown in Fig. 1, the sub-pixel-level corner reflector automatic positioning method based on the rigorous SAR imaging model realizes high-accuracy automatic extraction of corner reflectors. The method specifically comprises the following steps:
Step S1: acquire raw data, the raw data including the coordinates of the corner reflector points, the master image, the slave image, and the imaging parameters of the master and slave images.
According to embodiments of the invention, the method is applicable to any spaceborne or airborne image; the present embodiment takes an airborne Ka-band interferometric SAR image as an example, but is not limited thereto. The data were provided by the Second Academy of China Aerospace Science and Industry Corporation; the flight took place on March 22, 2015 over Yanliang, Shaanxi Province. Preferably, the invention is illustrated with two scenes from one flight strip, a master image and a slave image, whose imaging parameters are shown in Table 1. As is well known, in spaceborne/airborne SAR data processing, obtaining high-accuracy results requires refining the SAR processing chain with ground control point data; stable, reliable, and easily measured control point coordinates are therefore commonly obtained by deploying trihedral corner reflectors, so in the following description "corner reflector point" is used for "control point". The corner reflector point coordinates are given in the most common form of longitude, latitude, and height, the coordinate expression most widely used in surveying, mapping, and remote sensing. During the flight, the corner reflectors were deployed and their coordinates measured by GPS at the same time; the GPS-measured coordinates are shown in Table 2, where the first column gives the name of the corner reflector point, the second column the longitude, the third column the latitude, and the fourth column the height.
Table 1. Imaging parameters of the master and slave images
Table 2. Geodetic coordinates of the corner reflector points
Step S2: construct the rigorous SAR imaging model from the acquired raw data, compute with it the initial pixel coordinates (x0, y0) of all corner reflector points, and obtain the set {Pinit} of initial pixel coordinates of all corner reflector points. The master image data acquired above serve as the example below.
Step S2 involves two parts. The first transforms the geographic coordinates of the corner reflector points into geocentric Cartesian coordinates. The second obtains the initial pixel coordinates of the corner reflector points, expressed in the radar coordinate system, that is, image-space coordinates; the initial pixel coordinates of the corner reflector points are the basis for the subsequent coordinate refinement. The accuracy requirement on the initial pixel coordinates is not high: in general, positioning the corner reflector to within dozens of pixels suffices. The solution generally uses the range-Doppler equations, which describe the geometric relationship among slant range, Doppler, and the Earth ellipsoid during imaging and constitute the rigorous SAR imaging model.
Specifically, step S2 comprises the following sub-steps:
Step S2.1: transform the geodetic coordinates of the corner reflector point into geocentric Cartesian coordinates.
The geodetic coordinates of a corner reflector point are the ground-point geographic coordinates actually surveyed in the field. They are transformed into geocentric Cartesian coordinates because many subsequent processing steps are carried out in the geocentric Cartesian coordinate system, so a coordinate conversion is needed.
A geodetic coordinate system describes a point by longitude, latitude, and height: the geodetic longitude L is the angle between the ellipsoidal meridian plane of the ground point and the Greenwich meridian plane; the geodetic latitude B is the angle between the ellipsoid normal at the ground point and the ellipsoidal equatorial plane; the geodetic height H is the distance from the ground point to the ellipsoid along the ellipsoid normal. The rigorous SAR imaging model, in contrast, generally adopts the geocentric Cartesian coordinate system, whose origin coincides with the Earth's center of mass, whose Z axis points to the north pole, whose X axis points to the intersection of the Greenwich mean meridian plane and the equator, and whose Y axis is perpendicular to the XOZ plane so as to form a right-handed system. The two systems are defined entirely differently, so the geodetic coordinates generally need to be converted to geocentric Cartesian coordinates in order to compute within a single coordinate frame.
Specifically, the geodetic coordinates of a corner reflector point are converted to geocentric Cartesian coordinates according to formula (1):

XP = (N + H)·cos B·cos L
YP = (N + H)·cos B·sin L
ZP = [N·(1 − e²) + H]·sin B        (1)

where (XP, YP, ZP) are the geocentric Cartesian coordinates of the corner reflector point, N is the radius of curvature in the prime vertical, N = a·(1 − e²·sin²B)^(−1/2), (B, L, H) are respectively the geodetic latitude, longitude, and height of the corner reflector point, a is the semi-major axis, and e is the first eccentricity of the reference ellipsoid.
Corner reflector point D002 illustrates the conversion of geodetic coordinates to geocentric Cartesian coordinates. The geodetic coordinates (B, L, H) of point D002 obtained in step S1 are (35.10238°, 110.0559°, 667.631 m); the corresponding ellipsoid is the WGS-84 ellipsoid, with semi-major axis a = 6378137 m and first eccentricity squared e² = 6.6944 × 10⁻³, so the radius of curvature in the prime vertical is N = a·(1 − e²·sin²B)^(−1/2) = 6385208.2 m. With the geodetic coordinates of D002 and the ellipsoid parameters, substitution into the coordinate conversion formula above yields the geocentric Cartesian coordinates of D002.
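The conversion of step S2.1 can be sketched in Python. This is a sketch under stated assumptions: the function and constant names are mine; the ellipsoid parameters and the D002 coordinates are from the text, and the result is checked only loosely against the D002 position vector (−1791657.3, 4907631.1, 3647548.8) quoted later in the description, since the printed values are rounded.

```python
import math

A = 6378137.0      # WGS-84 semi-major axis a [m] (from the text)
E2 = 6.6944e-3     # first eccentricity squared e^2 (from the text)

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Formula (1): geodetic (B, L, H) -> geocentric Cartesian (XP, YP, ZP)."""
    B, L = math.radians(lat_deg), math.radians(lon_deg)
    # Radius of curvature in the prime vertical: N = a / sqrt(1 - e^2 sin^2 B)
    N = A / math.sqrt(1.0 - E2 * math.sin(B) ** 2)
    X = (N + h) * math.cos(B) * math.cos(L)
    Y = (N + h) * math.cos(B) * math.sin(L)
    Z = (N * (1.0 - E2) + h) * math.sin(B)
    return X, Y, Z

# Corner reflector D002: latitude 35.10238, longitude 110.0559, height 667.631 m
xp, yp, zp = geodetic_to_ecef(35.10238, 110.0559, 667.631)
```

The same function would convert every corner reflector point of Table 2 before the solution of step S2.5.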
Step S2.2: compute the satellite imaging time t corresponding to the master image. Specifically, the satellite imaging time t is computed according to formula (2):

t = t0 + Δt·y        (2)

where t0 is the start imaging time of the master image (that is, the imaging time of the first image line), Δt is the imaging time interval, and y is the azimuth coordinate of the corner reflector point in the radar coordinate system.
Note that the radar coordinate system performs two-dimensional imaging with the radar flight direction as the azimuth coordinate y and the radar scanning direction as the range coordinate x.
Likewise, D002 illustrates the computation of the satellite imaging time. Substituting the start imaging time (12079.8304) and imaging time interval (0.00779883) obtained in step S1 into formula (2) gives:
t = 12079.8304 + 0.00779883·y.
Step S2.3: fit a second-order polynomial between the satellite position vector and the satellite imaging time t, as shown in formula (3):

XS = ax0 + ax1·t + ax2·t²
YS = ay0 + ay1·t + ay2·t²
ZS = az0 + az1·t + az2·t²        (3)

where (XS, YS, ZS) is the satellite position vector to be fitted and (ax0, …, az2) are the position-fitting second-order polynomial coefficients. Since a satellite generally flies along a regular orbit, a second-order polynomial can fit the satellite position perfectly over a short time span.
Specifically, at least three groups of observations are needed to obtain the polynomial coefficients. Taking the master image as an example, the data obtained in step S1 contain 13 groups of observations in total. The observation times are computed from the state-vector start time (12079.8304 s) and the corresponding time interval (2.193425 s): ti = 12079.8304 + 2.193425·i, i = 0, 1, 2, …, 12.
Taking the X coordinate as an example, the 13 state position vectors obtained in step S1 yield 13 equations,
X = T·(ax0 ax1 ax2)′
where
X = (−1790516.1 −1790657.7 −1790799.2 −1790940.7 −1791082.3 −1791223.8 −1791365.3 −1791506.9 −1791648.4 −1791789.9 −1791931.4 −1792073.0 −1792214.5)′
and T is the 13 × 3 design matrix whose i-th row is (1 ti ti²). Least-squares solution gives the polynomial coefficients
(ax0 ax1 ax2) = (−992845.37 −67.5383 0.00012459).
Repeating the above steps for Y and Z yields the relational matrix between the satellite position and the imaging time.
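The least-squares fit of step S2.3 can be sketched with NumPy using the 13 X samples quoted above. One detail is assumed rather than taken from the text: the sketch fits in elapsed time τ = t − t0 instead of absolute time, which keeps the normal equations well conditioned without changing the fitted positions (so the coefficient values differ from those printed above, which refer to absolute time).

```python
import numpy as np

T0, DT = 12079.8304, 2.193425            # state-vector start time and interval [s]
t = T0 + DT * np.arange(13)              # the 13 state-vector epochs t_i

# The 13 X samples of the satellite position (from the text)
X = np.array([-1790516.1, -1790657.7, -1790799.2, -1790940.7, -1791082.3,
              -1791223.8, -1791365.3, -1791506.9, -1791648.4, -1791789.9,
              -1791931.4, -1792073.0, -1792214.5])

# Least-squares fit X = a0 + a1*tau + a2*tau^2 with tau = t - T0
a2, a1, a0 = np.polyfit(t - T0, X, 2)    # polyfit returns highest order first

def x_s(time):
    """Fitted satellite X position at an arbitrary imaging time."""
    tau = time - T0
    return a0 + a1 * tau + a2 * tau ** 2
```

The Y and Z components, and the three velocity components of step S2.4, are fitted in exactly the same way.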
Step S2.4: fit a second-order polynomial between the satellite velocity vector and the satellite imaging time t, as shown in formula (4):

VxS = bx0 + bx1·t + bx2·t²
VyS = by0 + by1·t + by2·t²
VzS = bz0 + bz1·t + bz2·t²        (4)

where (VxS, VyS, VzS) is the satellite velocity vector to be fitted, t is the satellite imaging time, and (bx0, …, bz2) are the velocity-fitting second-order polynomial coefficients.
Specifically, at least three groups of observations are needed to obtain the polynomial coefficients. Taking the master image as an example, the data obtained in step S1 contain 13 groups of observations in total. The observation times are computed from the state-vector start time (12079.8304 s) and the corresponding time interval (2.193425 s): ti = 12079.8304 + 2.193425·i, i = 0, 1, 2, …, 12.
Taking the velocity component VX as an example, the 13 state vectors obtained in step S1 yield 13 equations,
VX = T·(bx0 bx1 bx2)′
where
VX = (−64.552448 −64.551767 −64.579169 −64.578488 −64.577807 −64.549033 −64.548349 −64.547665 −64.546981 −64.684219 −64.655433 −64.654748 −64.544246)′
and T is the same 13 × 3 design matrix. Least-squares solution gives the polynomial coefficients
(bx0 bx1 bx2) = (−5693.1802 0.93345 −0.0000387).
Repeating the above steps for VY and VZ yields the relational matrix between the satellite velocity and the imaging time.
Step S2.5: construct the rigorous SAR imaging model, as shown in formula (5):

VxS·(XS − XP) + VyS·(YS − YP) + VzS·(ZS − ZP) = −fd·λ·(R0 + ΔR·x)/2
√[(XS − XP)² + (YS − YP)² + (ZS − ZP)²] = R0 + ΔR·x        (5)

where the first equation is the Doppler equation and the second is the range equation; (XS, YS, ZS) is the satellite position vector, (XP, YP, ZP) the geocentric Cartesian coordinates of the corner reflector point, (VxS, VyS, VzS) the satellite velocity vector, x the range coordinate of the corner reflector point, R0 the near-range slant distance, ΔR the slant-range pixel spacing (range resolution), fd the Doppler centroid frequency, and λ the radar wavelength.
For the master image obtained in step S1 and shown in Table 1, the near-range slant distance R0 (4885.9532 m), the slant-range pixel spacing ΔR (0.5353 m), the Doppler centroid frequency fd (560.9245 Hz), and the radar wavelength (λ = c/f = 0.008565 m at the Ka-band carrier frequency) are known quantities.
Step S2.6: solve the rigorous SAR imaging model constructed in step S2.5 by an iterative method to obtain the initial pixel coordinates of the corner reflector point.
Step S2.6 specifically comprises the following sub-steps:
Step S2.6.1: iteratively compute the initial azimuth coordinate y of the corner reflector point from the Doppler equation of formula (5). Step S2.6.1 comprises the following sub-steps:
S2.6.1.1: set the initial pixel coordinates of the corner reflector point; preferably, the initial value is the center point (xcenter, ycenter) of the master SAR image.
According to the embodiment of the invention, the master image parameter list of Table 1 gives 3376 lines and 5400 columns, so the center point of the master image is (xcenter, ycenter) = (2700, 1688).
S2.6.1.2: from the azimuth coordinate ycenter of the corner reflector point, compute the satellite imaging time ty, and from the fitted polynomials compute the corresponding satellite sensor position vector (XS, YS, ZS) and velocity vector (VxS, VyS, VzS);
Here ty is computed as in step S2.2 above, using the satellite position and velocity polynomials obtained in steps S2.3 and S2.4; details are not repeated.
According to the embodiment of the invention, with azimuth coordinate ycenter = 1688 the computed satellite imaging time is ty = 12092.9948 s, the satellite sensor position vector is (XS, YS, ZS) = (−1791365.573, 4907370.081, 3652941.435), and the corresponding satellite sensor velocity vector is obtained from the velocity polynomial of step S2.4.
S2.6.1.3: using the satellite sensor position vector (XS, YS, ZS), the velocity vector (VxS, VyS, VzS), and the corner reflector point position vector (XP, YP, ZP), compute the Doppler value fd from the Doppler equation and, at the same time, the Doppler rate fd'; combining the Doppler measurement fdop, compute the time correction Δt according to:
Δt = −(fd − fdop)/fd'
According to the embodiment of the invention, the computed time correction is Δt = −0.1403 s.
Here the Doppler rate fd' is obtained approximately by numerical differentiation. To compute fd' at the time ty corresponding to the current azimuth coordinate y, set dt = 0.01 s, compute the Doppler values fd(ty) and fd(ty + dt) at ty and ty + dt from the Doppler equation, and then
fd' = (fd(ty + dt) − fd(ty))/dt
S2.6.1.4: correct the satellite imaging time ty with the time correction Δt to obtain the new imaging time ty+1 = ty + Δt;
According to the embodiment of the invention, the new imaging time is
ty+1 = ty + Δt = 12092.9948 − 0.1403 = 12092.8545 s.
S2.6.1.5: repeat the iteration of steps S2.6.1.2-S2.6.1.4 with ty+1 until the absolute value of Δt falls below a threshold ε; the iteration then terminates and the master image imaging time ty is output. According to a preferred embodiment of the invention, ε = 10⁻⁷.
According to the embodiment of the invention, the computed master image imaging time is ty = 12092.8535 s.
S2.6.1.6: from the resolved master image imaging time ty, compute the azimuth coordinate of the corner reflector point as y = (ty − t0)/Δty, where t0 is the start time and Δty is the azimuth time interval (as shown in Table 1).
According to the embodiment of the invention, the computed azimuth coordinate of the corner reflector point is y = 1669.8915.
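The iteration of steps S2.6.1.1-S2.6.1.6 can be sketched as below. To keep the sketch self-contained, a fictitious straight-line sensor trajectory and ground point stand in for the fitted polynomials of steps S2.3-S2.4, so the numbers are illustrative and not the D002 values; only the wavelength, the Doppler centroid, the update Δt = −(fd − fdop)/fd', the numerical derivative with dt = 0.01 s, and the threshold ε = 10⁻⁷ follow the text.

```python
import math

LAM = 0.008565          # radar wavelength [m] (from the text)
F_DOP = 560.9245        # Doppler centroid measurement fdop [Hz] (from the text)

# Fictitious straight-line sensor trajectory (stand-in for the fitted polynomials)
def sat_pos(t): return (7600.0 * t, 0.0, 700000.0)
def sat_vel(t): return (7600.0, 0.0, 0.0)

P = (0.0, 0.0, 0.0)     # fictitious ground point (XP, YP, ZP)

def doppler(t):
    """fd from the Doppler equation: dot(V, S - P) = -fd * lambda * R / 2."""
    s, v = sat_pos(t), sat_vel(t)
    d = [si - pi for si, pi in zip(s, P)]
    r = math.sqrt(sum(c * c for c in d))
    return -2.0 * sum(vi * ci for vi, ci in zip(v, d)) / (LAM * r)

def solve_azimuth_time(t, eps=1e-7, dt=0.01, max_iter=50):
    """Steps S2.6.1.2-S2.6.1.5: adjust t until fd matches F_DOP."""
    for _ in range(max_iter):
        fd = doppler(t)
        fd_rate = (doppler(t + dt) - fd) / dt    # numerical derivative fd'
        delta = -(fd - F_DOP) / fd_rate          # time correction Delta t
        t += delta
        if abs(delta) < eps:                     # S2.6.1.5 convergence test
            return t
    raise RuntimeError("no convergence")

t_star = solve_azimuth_time(1.0)
```

With the real position and velocity polynomials in place of `sat_pos`/`sat_vel`, the converged `t_star` is the master image imaging time ty, and the azimuth coordinate follows as y = (ty − t0)/Δty, as in step S2.6.1.6.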
Step S2.6.2: compute the initial range coordinate x of the corner reflector point from the range equation of formula (5).
From the azimuth coordinate y of the corner reflector point, the satellite sensor position is computed; combined with the known geographic coordinates of the corner reflector point, the slant range from the satellite sensor to the corner reflector point is computed, and from it the range coordinate x of the ground corner reflector point.
Step S2.6.2 comprises the following sub-steps:
S2.6.2.1: from the initial azimuth coordinate y of the corner reflector point obtained in step S2.6.1, compute the satellite sensor position vector (XS, YS, ZS) from the position polynomial of formula (3) evaluated at t = t0 + Δt·y,
where t is the satellite imaging time currently computed for the master image, t0 is the start imaging time of the master image (that is, the imaging time of the first image line), Δt is the imaging time interval, y is the azimuth coordinate of the corner reflector point in the radar coordinate system, and (ax0, …, az2) are the position-fitting second-order polynomial coefficients.
According to the embodiment of the invention, with the known azimuth coordinate y = 1669.89 of the corner reflector point, the computed satellite sensor position vector is (XS, YS, ZS) = (−1791356.456, 4907373.417, 3652941.425).
S2.6.2.2: from the range equation
√[(XS − XP)² + (YS − YP)² + (ZS − ZP)²] = R0 + ΔR·x,
combined with the position vector (XP, YP, ZP) of the corner reflector point, the near-range slant distance R0, and the slant-range pixel spacing ΔR, compute the initial range coordinate x of the corner reflector point.
Preferably, with the corner reflector position vector (XP, YP, ZP) = (−1791657.3, 4907631.1, 3647548.8), near-range slant distance R0 = 4885.9532 m, and slant-range pixel spacing ΔR = 0.5353 m, the computed initial range coordinate is x = 977.72736.
As described above, the initial pixel coordinates (x, y) of the corner reflector point are finally obtained from the computed azimuth coordinate y and range coordinate x. For corner reflector point D002, the resolved initial pixel coordinates are (977.72736, 1669.8915).
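Step S2.6.2 then reduces to evaluating the range equation. A minimal sketch with the sensor and D002 position vectors quoted above (variable names mine); the printed vectors are rounded, so agreement with the x = 977.72736 of the text is checked only to within a few pixels.

```python
import math

R0, DR = 4885.9532, 0.5353                      # near-range slant distance [m], range spacing [m]
S = (-1791356.456, 4907373.417, 3652941.425)    # sensor position from step S2.6.2.1
P = (-1791657.3, 4907631.1, 3647548.8)          # corner reflector D002 (XP, YP, ZP)

# Range equation |S - P| = R0 + DR * x, solved for the range coordinate x
slant = math.sqrt(sum((si - pi) ** 2 for si, pi in zip(S, P)))
x = (slant - R0) / DR
```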
From steps S2.2–S2.6 it can be seen that, in the constructed rigorous SAR imaging model, the satellite position vector (XS, YS, ZS) (13 discrete satellite positions), the satellite velocity vector (13 discrete satellite velocities), the satellite imaging interval t0–t2 (start time t0, middle time t1, end time t2), the geodetic coordinates (B, L, H) of the corner reflector point, the ellipsoid parameters (such as the first eccentricity e and the radius of curvature in the prime vertical N), the radar wavelength λ, the near slant range R0, the range resolution ΔR, the Doppler centroid frequency fd, and the other parameters or intermediate auxiliary variables are all obtained through step S1; the specific parameters are shown in Tables 1 and 2. Formula (5) involves the satellite position vector and the satellite velocity vector, which can be calculated according to formulas (2) and (3); y in formula (2) is the azimuth coordinate of the corner reflector in the radar coordinate system. Therefore only two unknown parameters (x, y) remain to be solved in formula (5); since this is the first solution, the calculated (x, y) is the initial coordinate of the corner reflector, as shown in Table 3, and (x0, y0) denotes the initial pixel coordinate of the corner reflector point calculated according to formula (5).
In addition, the geodetic coordinates (B, L, H) of the corner reflector point obtained in step S1 are field-surveyed geographic coordinates, not the initial pixel coordinate of the corner reflector point, so they are converted to the space rectangular coordinates (XP, YP, ZP) that participate in the solution of step S2.5. Using the two equations of step S2.5, the two unknown parameters (x, y) can be resolved.
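The two-equation solution described above can be sketched numerically. The following is a minimal illustration and not the patent's code: the function name, the bisection over azimuth time, and the polynomial state representation are my assumptions, with the satellite state fitted by second-order polynomials of azimuth time as in steps S2.3–S2.4.

```python
import numpy as np

# Hedged sketch: solve the Doppler equation for azimuth time by bisection,
# then the range equation for the range coordinate x. Assumes the satellite
# position/velocity are second-order polynomials of time (steps S2.3-S2.4).
def initial_pixel_coord(P, pos_poly, vel_poly, t0, t2, R0, dR, fd, lam):
    """pos_poly, vel_poly: (3, 3) arrays; row i holds [a0, a1, a2] so that
    coordinate i at time t equals a0 + a1*t + a2*t**2."""
    def state(t):
        tt = np.array([1.0, t, t * t])
        return pos_poly @ tt, vel_poly @ tt

    lo, hi = t0, t2
    for _ in range(60):                      # bisection on the Doppler residual
        mid = 0.5 * (lo + hi)
        S, V = state(mid)
        R = np.linalg.norm(S - P)
        resid = V @ (S - P) + fd * lam * R / 2.0   # Doppler equation residual
        if resid > 0.0:
            hi = mid
        else:
            lo = mid
    t = 0.5 * (lo + hi)
    S, _ = state(t)
    x = (np.linalg.norm(S - P) - R0) / dR    # range equation -> range coordinate
    y = (t - t0) / (t2 - t0)                 # azimuth fraction; scale by line
    return x, y                              # count / PRF in a real system
```

A nadir-style toy geometry illustrates the behaviour: a target directly below the midpoint of a straight track resolves to x ≈ 0 and the middle azimuth fraction.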
Step S2.7: repeat steps S2.1–S2.6 to calculate the initial pixel coordinates of all corner reflector points, obtaining the set {Pinit} of the initial pixel coordinates of all corner reflector points. The calculated initial pixel coordinate of each corner reflector point is shown in Table 3.
Table 3 Initial pixel coordinates of the corner reflector points
Step S3: based on the set {Pinit} of initial pixel coordinates of the corner reflector points obtained in step S2, resolve the pixel-level coordinates of all corner reflector points in the main image using the maximum-amplitude algorithm, and obtain the set {(Ppix)} of pixel-level coordinates of all corner reflector points in the main image;
The accuracy of step S2 is generally within dozens of pixels, which is far from sufficient for precise corner reflector positioning. At this point the characteristics of the SAR image itself are generally used to bring the coordinate accuracy within a few pixels. Considering that a corner reflector appears as a highlighted point target in the image, the image amplitude can be used for pixel-level positioning. In the positioning process, interpolation is performed with the Fast Fourier Transform, so all window sizes used are powers of 2. In the solution process, D002 is still taken as the example; the pixel-level coordinate calculation method for the other corner reflectors is consistent with that of D002.
Step S3 specifically includes the following sub-steps:
Step S3.1: centered on the initial pixel coordinate (x0, y0) of the corner reflector point obtained in step S2, obtain a complex matrix of window size W1 × W1;
Taking D002 as an example, centered on the initial pixel coordinate (977.72736, 1669.8915) obtained in step S2, a complex matrix of window size W1 × W1 is obtained. Preferably, W1 = 128.
Step S3.2: calculate the amplitude of the W1 × W1 complex matrix to obtain the first amplitude matrix.
Specifically, the amplitude of the W1 × W1 complex matrix is calculated according to the formula g(i,j) = sqrt(Re(i,j)² + Im(i,j)²), wherein Re(i,j) denotes the real part of pixel (i,j) of the W1 × W1 complex matrix and Im(i,j) denotes the imaginary part of pixel (i,j) of the W1 × W1 complex matrix.
Preferably, taking W1 = 128 as an example, a 128 × 128 complex image is obtained; amplitude calculation is performed on each pixel (i,j) of the complex matrix with the formula g(i,j) = sqrt(Re(i,j)² + Im(i,j)²), finally yielding a 128 × 128 amplitude matrix.
Step S3.3: find the maximum amplitude value in the first amplitude matrix and calculate the pixel coordinate of the point of maximum amplitude in the main image.
For example, in the 128 × 128 amplitude matrix, the first pixel g(1,1) is taken as the maximum-amplitude point and compared with all remaining amplitude points g(i,j), where i = 2…128 and j = 2…128; if g(i,j) is greater than g(1,1), the amplitude g(i,j) is assigned to g(1,1) and continues to participate in the subsequent comparison, until all points have been traversed and the maximum-amplitude point is output.
For the first amplitude matrix obtained in step S3.2, the pixel coordinate of the maximum point in the original image is calculated. The amplitude maximum provides a further refined coordinate of the corner reflector. If the influence of gross errors is not considered, the corner reflector positioning accuracy provided by the amplitude maximum can reach one pixel. To reach sub-pixel accuracy, the coordinate generally needs to be further refined, i.e., the sub-pixel coordinate is obtained by means of interpolation.
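Steps S3.1–S3.3 can be sketched as follows; the helper name and the (row, column) chip-offset convention are my assumptions, not the patent's code:

```python
import numpy as np

# Minimal sketch: given a complex SLC chip centred on the initial coordinate,
# return the pixel-level position of the amplitude peak within the full image.
def amplitude_peak(chip, top_left):
    """chip: complex (W1, W1) array; top_left: (row0, col0) of the chip in
    the main image. Returns the image coordinate of the maximum amplitude."""
    amp = np.abs(chip)                      # sqrt(Re^2 + Im^2) per pixel
    i, j = np.unravel_index(np.argmax(amp), amp.shape)
    return top_left[0] + i, top_left[1] + j
```

`np.abs` on a complex array implements exactly the amplitude formula g(i,j) = sqrt(Re² + Im²) used above.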
Step S3.4: centered on the pixel coordinate obtained in step S3.3, obtain a complex matrix of window size W2 × W2.
For example, centered on the D002 coordinate (1001, 1638) obtained in step S3.3, a complex matrix of window size W2 × W2 is obtained. Because an unsuitable window size will bias the result of the interpolation algorithm, the window chosen here should be smaller than the window chosen in step S3.3, that is, W2 < W1. Preferably, W2 = 32.
Step S3.5: calculate the amplitude of the W2 × W2 complex matrix to obtain the second amplitude matrix;
Specifically, the amplitude of the W2 × W2 complex matrix is calculated according to the formula g(i,j) = sqrt(Re(i,j)² + Im(i,j)²), wherein Re(i,j) denotes the real part of pixel (i,j) of the W2 × W2 complex matrix and Im(i,j) denotes the imaginary part of pixel (i,j) of the W2 × W2 complex matrix.
Preferably, taking W2 = 32 as an example, a 32 × 32 complex image is obtained; amplitude calculation is performed on each pixel (i,j) of the complex matrix with the formula g(i,j) = sqrt(Re(i,j)² + Im(i,j)²), finally yielding a 32 × 32 amplitude matrix.
Step S3.6: oversample the amplitudes in the second amplitude matrix by a factor of N1 to obtain the inverse-transform matrix, and calculate the amplitude of each pixel in the inverse-transform matrix to obtain the third amplitude matrix;
To guarantee computing speed and accuracy, oversampling is generally performed with the Fast Fourier Transform. The basic principle is that the abruptly varying information in an image (such as texture, noise, and speckle) is mostly high-frequency information; after the Fourier transform, this high-frequency information lies at the edges of the spectrum while the low-frequency information lies at its center. However the spectrum is rearranged in Fourier space, the signal characteristics are preserved as long as neither the high-frequency nor the low-frequency information is lost. Step S3.6 specifically includes the following sub-steps:
Step S3.6.1: perform the Fast Fourier Transform (FFT) on the amplitudes in the second amplitude matrix, transforming them to the complex (frequency) domain;
Step S3.6.2: perform N1-fold oversampling on the amplitudes of the second amplitude matrix transformed to the complex domain;
The complex domain is divided into four parts whose upper-left and lower-right corner coordinates are respectively: A[(1,1),(W2/2,W2/2)], B[(W2/2+1,1),(W2,W2/2)], C[(1,W2/2+1),(W2/2,W2)], D[(W2/2+1,W2/2+1),(W2,W2)]. At the same time, an empty matrix M is created to store the frequency-domain signal produced by the Fast Fourier Transform. Finally, the inverse Fast Fourier Transform is performed on the matrix M after assignment to obtain the complex matrix after the inverse transform; this process performs the N1-fold oversampling of the amplitudes in the second amplitude matrix;
Step S3.6.2 specifically includes the following sub-steps:
S3.6.2.1: divide the complex domain into four parts whose upper-left and lower-right corner coordinates are respectively: A[(1,1),(W2/2,W2/2)], B[(W2/2+1,1),(W2,W2/2)], C[(1,W2/2+1),(W2/2,W2)], D[(W2/2+1,W2/2+1),(W2,W2)];
Preferably, taking corner reflector point D002 as an example, the four coordinate groups are respectively A[(1,1),(16,16)], B[(17,1),(32,16)], C[(1,17),(16,32)], D[(17,17),(32,32)].
S3.6.2.2: create an empty matrix M to store the frequency-domain signal produced by the Fast Fourier Transform, wherein the size of the empty matrix M is (W2×N1) × (W2×N1) and N1 is the oversampling factor. The (W2/2) × (W2/2) pixel blocks at the upper-left, upper-right, lower-left, and lower-right corners of the empty matrix are assigned the values of A, B, C, and D respectively, and the matrix M after assignment participates in the inverse Fourier transform of the next step.
The empty matrix is used to store the frequency-domain signal produced by the Fast Fourier Transform. The underlying principle is that the dimension of a matrix does not change during the mutual conversion between the frequency domain and the spatial domain; therefore, to ensure that the size of the matrix after oversampling meets the requirement, the size of the empty matrix M must be consistent with the matrix size after oversampling.
Preferably, the pixel block occupied by each of the upper-left, upper-right, lower-left, and lower-right corners of the empty matrix M is 16 × 16; in addition, with oversampling factor N1 = 4, (W2×N1) × (W2×N1) = 128 × 128. Each of the blocks A, B, C, D has size 16 × 16 and contains 256 data values. Taking block A as an example, since block A holds complex data, it contains [(3.14,4.48)ii=1, (5.01,1.70)ii=2, (0.43,−1.94)ii=3, …, (−4.51,1.70)ii=255, (0.02,0.09)ii=256], where ii runs from 1 to 256.
S3.6.2.3: perform the inverse Fast Fourier Transform on the empty matrix M after assignment to obtain the inverse-transform matrix, wherein the inverse-transform matrix is still in complex form;
The processing of the above three sub-steps performs the N1-fold oversampling of the amplitudes in the second amplitude matrix.
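The three sub-steps above can be sketched as standard frequency-domain zero-padding; the quadrant-to-corner placement below follows the usual unshifted FFT layout, and the N1² amplitude rescaling is my added normalization (an assumption, not stated in the patent) so that amplitude levels stay comparable before and after oversampling:

```python
import numpy as np

# Sketch of the quadrant-splitting FFT oversampling (steps S3.6.2.1-S3.6.2.3).
# Assumes the window size and the oversampling factor n1 are powers of two.
def fft_oversample(amp, n1):
    w = amp.shape[0]
    h = w // 2
    spec = np.fft.fft2(amp)                  # S3.6.2.1: go to the complex domain
    big = np.zeros((w * n1, w * n1), dtype=complex)
    big[:h, :h] = spec[:h, :h]               # S3.6.2.2: copy each spectral
    big[-h:, :h] = spec[-h:, :h]             # quadrant to the matching corner
    big[:h, -h:] = spec[:h, -h:]             # of the larger empty matrix M,
    big[-h:, -h:] = spec[-h:, -h:]           # zeros elsewhere
    # S3.6.2.3: inverse transform; rescale so amplitude levels are preserved
    return np.abs(np.fft.ifft2(big)) * n1 * n1
```

Because only zeros are inserted between the low- and high-frequency quadrants, no spectral content is lost, which is exactly the preservation property the text argues for.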
Step S3.6.3: calculate the amplitude of each pixel in the inverse-transform matrix to obtain the third amplitude matrix.
Specifically, the amplitude of the inverse-transform matrix is calculated according to the formula g(i,j) = sqrt(Re(i,j)² + Im(i,j)²), wherein Re(i,j) denotes the real part of pixel (i,j) of the inverse-transform matrix and Im(i,j) denotes the imaginary part of pixel (i,j) of the inverse-transform matrix.
Step S3.7: find the maximum amplitude value in the third amplitude matrix and calculate the pixel-level coordinate (x1, y1) of the point of maximum amplitude in the main image.
Still taking corner reflector point D002 as an example, the pixel coordinate calculated through the above steps is (x1, y1) = (1000.5, 1638.0625).
Step S3.8: repeat steps S3.1–S3.7 for all corner reflector points to obtain the set {(Ppix)} of pixel-level coordinates of all corner reflector points in the main image, as shown in Table 4.
Table 4 Pixel-level image coordinates of the corner reflector points
Step S4: perform interferometric-calibration verification on the pixel-level coordinates of all corner reflector points in the set {(Ppix)} to obtain the sub-pixel-level coordinates of the corner reflector points;
This step processes the rigorous SAR imaging model constructed in step S2 together with the interferometric phase equation. The resulting system of three equations (the two equations of the rigorous SAR imaging model plus the interferometric phase equation) can describe the three-dimensional coordinate of a ground target point in the geocentric space rectangular coordinate system, and is called the terrain three-dimensional reconstruction model. Since the corner reflector coordinates obtained in step S2.1 are likewise in the geocentric space rectangular coordinate system, the terrain three-dimensional reconstruction model can accordingly be used to evaluate the positioning accuracy of the pixel-level image coordinates of the corner reflector points obtained in step S3, yielding the final sub-pixel-level coordinate (x2, y2) of each corner reflector point.
Under normal conditions, the terrain three-dimensional reconstruction model is susceptible to errors in the interferometric system parameters, so high-precision corner reflector points (i.e., control points) are needed for interferometric calibration to obtain more accurate interferometric system parameters. However, corner reflector points are susceptible to SAR speckle noise, so the acquired image pixel coordinates carry a certain error; the terrain three-dimensional reconstruction model can be used to evaluate the pixel accuracy of the corner reflector points and reject those with larger errors. Based on the above analysis, step S4 first constructs the basic relational expression of the terrain three-dimensional reconstruction model and, based on it, derives the partial derivatives of the model with respect to the interferometric system parameters, thereby constructing the interferometric calibration model. Then, with the set {Ppix} of pixel-level image coordinates obtained in step S3, iterative least-squares computation is performed on the interferometric calibration model to optimize and output the interferometric system parameters; the accuracy of each corner reflector point before and after adjustment is then assessed, points with larger errors are rejected, and the computation is iterated again until all points meet the set requirement, at which point the sub-pixel-level positioning results of the corner reflector points are output.
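The iterative least-squares loop with gross-error rejection described above can be sketched as follows; the callback signatures, the Gauss-Newton update, and the reject-the-single-worst-point policy are my assumptions about one reasonable realization, not the patent's code:

```python
import numpy as np

# Hedged sketch of the calibration loop: iterate least squares on the
# calibration model, drop the corner reflector with the largest residual,
# and repeat until every remaining point meets the threshold.
# J_fn(point, params) -> (m, k) design-matrix rows for that point;
# res_fn(point, params) -> (m,) residual vector for that point.
def calibrate(points, params, J_fn, res_fn, tol, max_iter=20):
    pts = list(points)
    for _ in range(max_iter):
        J = np.vstack([J_fn(p, params) for p in pts])
        r = np.concatenate([res_fn(p, params) for p in pts])
        params = params + np.linalg.lstsq(J, -r, rcond=None)[0]
        errs = [np.linalg.norm(res_fn(p, params)) for p in pts]
        if max(errs) <= tol:                 # all points meet the requirement
            return params, pts
        pts.pop(int(np.argmax(errs)))        # reject the worst point
    return params, pts
```

On a toy linear model with one gross-error observation, the loop rejects the outlier and recovers the exact parameters, which mirrors the intended behaviour on the interferometric system parameters.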
Step S4 specifically includes the following sub-steps:
Step S4.1: based on the rigorous SAR imaging model and the interferometric phase equation, determine the three-dimensional reconstruction model based on the orthogonal decomposition of the look vector;
In this step, the basic relational expression of the three-dimensional reconstruction model is determined first; it is the system of equations shown in formula (1), comprising the range equation, the Doppler equation, and the interferometric phase equation. Then, according to the range equation, the Doppler equation, and the interferometric phase equation, the three-dimensional reconstruction model based on the orthogonal decomposition of the look vector is derived.
Wherein, VxS·(XS−XP) + VyS·(YS−YP) + VzS·(ZS−ZP) = −fd·λ·(R0 + ΔR·x)/2 is the Doppler equation, R0 + ΔR·x = sqrt((XS−XP)² + (YS−YP)² + (ZS−ZP)²) is the range equation, and φabs = φ + 2kπ = 2πQ·(R1 − R2)/λ is the interferometric phase equation, wherein (XS, YS, ZS) is the satellite position vector, (XP, YP, ZP) is the geocentric space rectangular coordinate of the corner reflector point, (VxS, VyS, VzS) is the satellite velocity vector, x is the range coordinate of the corner reflector point, R0 is the near slant range, ΔR is the range resolution, fd is the Doppler centroid frequency, λ is the radar wavelength, φabs is the absolute interferometric phase, φ is the wrapped interferometric phase, k is the integer ambiguity number, Q is the interferometric mode, R1 is the distance from the satellite main antenna to the target point, and R2 is the distance from the satellite slave antenna to the target point.
The range equation, the Doppler equation, and the interferometric phase equation form a highly nonlinear system of equations; solving them directly by numerical methods is very difficult, so the InSAR geometric model (which contains these three equations) must be analyzed and the three equations expressed and solved in the form of the three-dimensional reconstruction model. The look-vector orthogonal decomposition method avoids the nonlinear numerical solution relations between them, has a clear conception, and is easy to solve. On this basis, the present invention uses the look-vector orthogonal decomposition rule and obtains the analytic solution of the three-dimensional coordinate of the target point through the orthogonal decomposition of the look vector and a coordinate conversion.
As shown in Fig. 2, based on the rigorous SAR imaging model and the interferometric phase equation, the present invention determines the three-dimensional reconstruction model based on the orthogonal decomposition of the look vector as: P = S1 + r1 = S1 + rs1·r̂1,
wherein P is the ground point coordinate, S1 is the satellite position vector, r1 is the look vector, rs1 denotes the distance from the satellite position corresponding to the main image to the ground target (rs1 = |r1|, i.e., the modulus of the look vector r1), and r̂1 is the unit look vector obtained by normalizing r1.
That is, the ground point coordinate P is expressed by the sum over the triangle formed by the satellite position vector S1 corresponding to the main image and the look vector r1; the satellite position vector S1 corresponding to the main image, the distance rs1 from the corresponding satellite to the ground target, and the unit look vector r̂1 therefore need to be calculated.
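In code form, the reconstruction relation P = S1 + rs1·r̂1 is a single line once the unit look vector is available in the geocentric frame (via a VNW-to-geocentric rotation matrix, introduced later in step S4.3.5); this sketch is illustrative only and not the patent's code:

```python
import numpy as np

# Minimal sketch: ground point = main-image satellite position plus slant
# range times the unit look vector (VNW components rotated to ECEF by M).
def reconstruct(S1, rs1, M, r_vnw):
    return np.asarray(S1) + rs1 * (np.asarray(M) @ np.asarray(r_vnw))
```

With an identity rotation and a straight-down unit look vector, a satellite at 700 km altitude maps to the sub-satellite ground point, as expected from the triangle relation.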
It is well known that the three-dimensional reconstruction model can describe the mathematical relationship between the image point coordinate of a corner reflector point and the geographic coordinate of the ground point. However, since the parameters required to solve the three-dimensional reconstruction model carry errors, the error terms must be determined through the model, and an error model (i.e., the interferometric calibration model) must be constructed from the three-dimensional reconstruction model to eliminate the error terms and provide accurate interferometric system parameter information for the subsequent calculation. In the following steps, first, the variables of the three-dimensional reconstruction model determined in step S4.1 are derived to obtain its analytic-solution expression (specifically steps S4.2–S4.4). Second, since the parameters constituting the analytic-solution expression of the three-dimensional reconstruction model inevitably carry errors, the main error sources must be determined through the model, the partial derivatives of the model function with respect to the main error sources derived in the X, Y, and Z directions, and the interferometric calibration model constructed to optimize the interferometric system parameters (specifically steps S4.5–S4.8). Finally, using the three-dimensional reconstruction model and the interferometric calibration model, the interferometric system parameters are optimized, gross errors are rejected, and steps S4.5–S4.8 are repeated until all corner reflector points satisfy the threshold requirement, at which point the sub-pixel-level coordinates of the corner reflector points are output (specifically steps S4.9–S4.12). Each step is described in detail below.
Step S4.2: calculate the satellite position vector S1 corresponding to the main image by evaluating the fitted second-order polynomials at the imaging moment, i.e., each coordinate of (XS, YS, ZS) is obtained as a0 + a1·t + a2·t², wherein (XS, YS, ZS) is the satellite position vector to be fitted, the coefficients a0, a1, a2 form the fitted second-order polynomial coefficient matrix, and t is the satellite imaging moment corresponding to the main image.
Preferably, the fitted satellite position vector is calculated according to step S2.3.
Step S4.3: calculate the distance rs1 from the satellite position corresponding to the main image to the ground target and the unit look vector r̂1.
According to the orthogonal decomposition of the look vector, in the VNW moving coordinate system the unit look vector r̂1 is decomposed into components on the V axis, the N axis, and the W axis, embodied as r̂1 = rv·v̂ + rn·n̂ + rw·ŵ, wherein rv is the component of the unit look vector on the V axis, rn is the component of the unit look vector on the N axis, and rw is the component of the unit look vector on the W axis.
That is, step S4.3 calculates rs1 and the unit look vector r̂1.
The basic idea of the orthogonal decomposition of the look vector is to decompose the unit look vector orthogonally in the VNW moving coordinate system whose origin is the antenna phase center.
Definition of the VNW moving coordinate system: with the phase center of the main antenna as the coordinate origin, the V axis is defined as the direction of the platform velocity vector, the W axis is defined as the cross product of the platform velocity vector and the interferometric baseline vector, and the N axis is determined jointly by the cross product of the V axis and the W axis, constituting a right-handed coordinate system. In interferometric SAR data processing, the moving coordinate system is generally used to solve for the unit look vector. In general, the three orthogonal basis vectors are named v, n, and w, denoting the V axis, the N axis, and the W axis in turn, and the system is called the VNW system.
The three orthogonal basis vectors v̂, n̂, ŵ of the VNW moving coordinate system are expressed as: v̂ = v/|v|, ŵ = (v×b)/|v×b|, n̂ = ŵ×v̂,
wherein b is the interferometric baseline vector and v is the platform flight velocity vector.
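The basis construction above can be sketched directly; the exact cross-product ordering for the N axis is my assumption, chosen so that (v̂, n̂, ŵ) is right-handed as the definition requires:

```python
import numpy as np

# Sketch of the VNW moving frame: V along the platform velocity, W along
# the cross product of velocity and baseline, N completing the triad.
def vnw_basis(v, b):
    vh = v / np.linalg.norm(v)
    w = np.cross(v, b)
    wh = w / np.linalg.norm(w)
    nh = np.cross(wh, vh)          # right-handed: v x n = w
    return vh, nh, wh
```

Right-handedness can be checked by verifying v̂ × n̂ = ŵ and pairwise orthogonality of the three unit vectors.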
Step S4.3 specifically includes the following sub-steps:
Step S4.3.1: based on the unit look vector r̂, calculate its component rv on the V axis; from the Doppler equation, with the look vector r pointing from the satellite to the ground point, the expression is rv = fd·λ/(2·|v|), wherein v is the satellite flight velocity vector, |v| is the velocity modulus, r is the look vector, |r| is the modulus of the look vector r, fd is the Doppler centroid frequency, and λ is the radar wavelength.
Step S4.3.2: based on the unit look vector r̂, calculate its component rn on the N axis according to formula (4), wherein rs1 denotes the distance from the main antenna to the target point, |bpv| is the modulus of the baseline vector component perpendicular to the satellite velocity direction, bv is the projection of the baseline vector onto the satellite velocity direction, rv is the component of the unit look vector on the V axis, B is the baseline modulus, λ is the radar wavelength, φabs is the absolute interferometric phase, and Q is the antenna operating mode, wherein Q = 2 when the antenna operates in repeat-track mode and Q = 1 when the antenna operates in single-transmit dual-receive mode. Preferably, in the present embodiment, Q = 1.
Step S4.3.3: according to the pixel coordinates {Ppix} of the corner reflector point, obtain its absolute interferometric phase φabs = φ1 + φ2 + 2kπ, wherein φ1 is the wrapped phase and φ2 is the flat-earth phase. The wrapped phase and the flat-earth phase are provided by the corresponding data files and are read according to the pixel coordinate information of the corner reflector obtained in step S3. The absolute interferometric phase obtained is used to calculate the range difference from the main and slave antenna phase centers to the ground target point, Δr = λ·φabs/(2πQ), which participates in the calculation of formula (4) of step S4.3.2.
Step S4.3.4: the unit look vector r̂ was described in step S4.3. Since r̂ is a unit vector and its expression satisfies the right-hand rule, its component rw on the W axis is expressed as rw = ±sqrt(1 − rv² − rn²).
Taking corner reflector point D002 as an example, from the imaging parameters obtained in step S1 (including the main-image start time, the Doppler polynomial, the near slant range, the radar wavelength, the 13 satellite position vectors and velocity vectors, the azimuth resolution, the range resolution, the baseline vector, the ellipsoid semi-major axis, and the ellipsoid semi-minor axis), the second-order polynomial between the satellite position vector and the satellite imaging moment t fitted in step S2.3, and the second-order polynomial between the satellite velocity vector and the satellite imaging moment t fitted in step S2.4, the components are computed.
According to the definition of the VNW coordinate system, the positive sign is taken when the W axis points downward, and the negative sign when the W axis points upward; here "downward" and "upward" are not strictly the geocentric or zenith directions. Preferably, the positive sign is taken in the present embodiment.
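The component bookkeeping of steps S4.3.1–S4.3.4 can be sketched compactly. The rv sign below follows the document's Doppler-equation convention with the look vector pointing from satellite to ground, and rn is supplied by the caller rather than recomputed, because its full baseline-geometry expression (formula (4)) is not reproduced here; all names are illustrative assumptions:

```python
import numpy as np

# Hedged sketch: rv from the Doppler equation, rw closing the unit norm
# with the sign set by the W-axis orientation; rn comes from formula (4).
def unit_look_components(fd, lam, speed, rn, w_down=True):
    rv = fd * lam / (2.0 * speed)            # Doppler equation projected on V
    rw2 = max(0.0, 1.0 - rv * rv - rn * rn)  # unit-norm closure
    rw = np.sqrt(rw2) if w_down else -np.sqrt(rw2)
    return rv, rn, rw
```

The unit-norm closure rw = ±sqrt(1 − rv² − rn²) guarantees rv² + rn² + rw² = 1, which is the defining property used in step S4.3.4.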
Step S4.3.5: as described in step S2.2, the rigorous SAR imaging model is constructed in the geocentric space rectangular coordinate system, and the geographic coordinate of the target point P is likewise expressed in the geocentric space rectangular coordinate system. To obtain the unit look vector of the target point P in the geocentric space rectangular coordinate system, the corner reflector coordinates in the VNW moving coordinate system must be converted to the geocentric space rectangular coordinate system; the conversion matrix M is composed of the basis vectors v̂, n̂, ŵ as its columns, wherein b is the interferometric baseline vector, v is the satellite flight velocity vector, and v̂ is the unit vector of the satellite flight velocity.
Preferably, from the imaging parameters obtained in step S1 (including the baseline vector and the satellite velocity vector) and the satellite velocity vector obtained in step S2, the conversion matrix is computed.
Step S4.4: based on the satellite position vector S1 corresponding to the main image obtained in step S4.2 and the rs1 and r̂1 obtained in step S4.3, derive the three-dimensional reconstruction model P = S1 + rs1·M·(rv, rn, rw)ᵀ,
wherein P is the ground point coordinate, S1 is the satellite position vector, r1 is the look vector, rs1 denotes the distance from the satellite corresponding to the main image to the ground target point, M is the conversion matrix, r̂ is the unit look vector, rv, rn, and rw are the components of the unit look vector on the V, N, and W axes, b is the interferometric baseline vector, v is the platform flight velocity vector, v̂ is the unit vector of the flight velocity, λ is the radar wavelength, φabs is the absolute interferometric phase, Q is the antenna operating mode, |bpv| is the modulus of the baseline vector component perpendicular to the satellite velocity direction, bv is the projection of the baseline vector onto the satellite velocity direction, fdop is the Doppler centroid frequency, and B is the interferometric baseline modulus.
Taking corner reflector point D002 as an example to illustrate the calculation process, the satellite position vector corresponding to the main image, the slant range rs1 = 5421.521, the rotation matrix, and the unit look vector are computed.
Steps S4.2–S4.4 carried out the derivation and description of the basic expression given in step S4.1 and determined the analytic expression of the three-dimensional reconstruction model, providing the theoretical basis for the subsequent steps S4.5–S4.8, in which, according to the three-dimensional reconstruction model determined in steps S4.2–S4.4, the partial derivatives of the model with respect to the interferometric system parameters are calculated, the interferometric calibration model (error model) is constructed, and the least-squares solution is carried out. The processing of steps S4.5–S4.8 is described in detail below.
Step S4.5: rewrite the three-dimensional reconstruction model of step S4.4 to obtain the observation equations of the three-dimensional reconstruction model;
During the establishment of the three-dimensional reconstruction model in steps S4.1–S4.4, every system parameter is inevitably affected by errors. To obtain accurate system parameters, calibration processing must be performed using the corner reflector points to optimize the interferometric system parameters and output high-precision interferometric system parameters, so that reliable interferometric system parameters are provided for finally obtaining high-precision corner reflector coordinates. The interferometric calibration model derives the partial derivative of each system parameter based on the three-dimensional reconstruction model, then solves for the system parameters by the least-squares principle, optimizes and improves the pixel coordinates of the corner reflector points, and rejects gross errors. Specifically, formula (7) is rewritten to obtain its functional form, i.e., the observation equations of the three-dimensional reconstruction model:
In the formula, Fx(X), Fy(X), and Fz(X) are the functions of the X, Y, and Z directions respectively; X = (bl, bα, r0, φ, fd) are the interferometric system parameters to be optimized, wherein bl is the baseline length, bα is the baseline inclination angle, r0 is the near slant range, φ is the interferometric phase, and fd is the Doppler frequency; these parameters all participate in the calculation of the three-dimensional reconstruction model and carry certain errors, so they need to be optimized. S1x, S1y, S1z are the coordinates of the satellite position of the main image; rs1 is the slant range from the main image to the ground corner reflector (i.e., the calibration point); u11, u12, u13, u21, u22, u23, u31, u32, u33 are the elements of the rotation matrix M; rv, rn, and rw are the components of the unit look vector on the V, N, and W axes; Px, Py, Pz are the geocentric space rectangular coordinates of the corner reflector, obtained from the main and slave image parameter information provided in step S1.
Step S4.6: calculate the partial derivatives with respect to the interferometric system parameters in the observation equations of the three-dimensional reconstruction model;
According to formula (8) of step S4.5, formula (8) is differentiated with respect to the interferometric system parameters (including the baseline length bl, the baseline inclination angle bα, the near slant range r0, the interferometric phase φ, and the Doppler frequency fd) and linearized to obtain the error equations (i.e., the interferometric calibration model). The partial derivatives of the interferometric system parameters therefore need to be calculated, in the following sub-steps:
Step S4.6.1: according to formula (8), calculate the partial derivatives of the functions Fx, Fy, and Fz of the corner reflector point in the X, Y, and Z directions with respect to the baseline length bl. The process is explained with the partial derivative of Fx with respect to bl: Fx involves the satellite position coordinate S1x, the slant range (distance) rs1 from the main image to the ground corner reflector (i.e., the ground target point), the rotation-matrix elements u11, u12, u13, and the components rv, rn, rw of the unit look vector on the V, N, and W axes. Analysis shows that the calculation of rs1 and of u11, u12, u13 does not involve the baseline length bl, so their partial derivatives with respect to bl are 0, whereas the calculation of the unit look vector does use the baseline length bl, so its partial derivative with respect to bl is not 0 and must be calculated. Taking the partial derivative of Fx with respect to bl is therefore converted into taking the partial derivative of the unit look vector with respect to bl; the partial derivatives of Fy and Fz with respect to bl are similar. Hence, taking the partial derivatives of the functions Fx, Fy, and Fz in the X, Y, and Z directions with respect to bl is converted into differentiating the unit look vector with respect to bl, obtaining:
Wherein bl is the baseline length; rv, rn, and rw are the components of the unit look vector on the V, N, and W axes; rs1 is the slant range from the main image to the ground corner reflector point; Δr is the difference between the distance from the main-image satellite to the corner reflector point and the distance from the slave-image satellite to the corner reflector point.
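The analytic partials derived in steps S4.6.1–S4.6.5 can be cross-checked numerically; a central finite difference (my addition for verification, not the patent's method) approximates any column of the Jacobian of the observation equations:

```python
import numpy as np

# Sketch: numeric partial derivative of a vector function F with respect
# to parameter k, for cross-checking the analytic calibration Jacobian.
def numeric_partial(F, params, k, h=1e-6):
    p1, p2 = params.copy(), params.copy()
    p1[k] += h
    p2[k] -= h
    return (F(p1) - F(p2)) / (2.0 * h)      # central difference, O(h^2) error
```

For polynomial test functions the central difference is exact up to rounding, which makes it a convenient sanity check before assembling the least-squares design matrix.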
Step S4.6.2: according to formula (8), calculate the partial derivatives of Fx, Fy, and Fz in the X, Y, and Z directions with respect to the baseline inclination angle bα. The derivatives of the unit look vector components in the moving coordinate system with respect to bα are all 0; the baseline inclination angle bα has an influence only when the moving coordinate system is established, so the derivative with respect to bα is:
Wherein bα is the baseline inclination angle; rv, rn, and rw are the components of the unit look vector on the V, N, and W axes; rs1 is the slant range from the main image to the ground corner reflector; M is the rotation matrix.
In formula (11), taking the partial derivative of M with respect to bα gives:
Wherein v̂, n̂, and ŵ are the three orthogonal basis vectors of the VNW moving coordinate system, bα is the baseline inclination angle, and M is the rotation matrix.
Step S4.6.3 calculates separately F on tri- directions X, Y, Z according to formula (8)x、FyAnd FzFunction is to initial oblique distance Partial derivative.Oblique distance of the primary antenna to ground angle reflector pointsWith initial oblique distance r0Between relationship:Along away from The image coordinate of descriscent Shangdi millet cake is nrg, the pixel resolution in oblique distance is rps.Formula (9) is to r0It differentiates, obtains:
Wherein, Fx、FyAnd FzThe function of respectively X, Y, Z-direction, unit visual direction amountrvFor unit visual direction amount Component on V axis, rnFor unit visual direction amountComponent on N axis, rwFor unit visual direction amountComponent on W axis, u11, u12,u13,u21,u22,u23,u31,u32,u33For spin matrixCalculating variable, r0For nearly oblique distance,For primary antenna to ground The oblique distance of corner reflector point.
Next, the derivative of the unit look vector with respect to r_0 is taken:
In the formula above, b_pv is the component of the baseline vector perpendicular to the satellite velocity direction. Substituting formula (14) into formula (13) yields the partial derivative with respect to r_0; here r_s1 is the slant range from the master antenna to the ground corner reflector point, r_v, r_n and r_w are the components of the unit look vector on the V, N and W axes, b_pv is the baseline vector component perpendicular to the satellite velocity direction, Δr is the difference between the distances from the master and slave satellite positions to the corner reflector point, and b_l is the baseline length.
Step S4.6.4: according to formula (8), separately compute the partial derivatives of the functions F_x, F_y and F_z in the X, Y and Z directions with respect to the interferometric phase, obtaining:
where the derivative of the unit look vector with respect to the interferometric phase is:
Here r_s1 is the slant range from the master antenna to the ground corner reflector point; r_v, r_n and r_w are the components of the unit look vector on the V, N and W axes; b_pv is the baseline vector component perpendicular to the satellite velocity direction; the phase term is the absolute interferometric phase; b_l is the baseline length; λ is the radar wavelength; and Q encodes the antenna operating mode: Q = 2 when the antenna operates in repeat-pass mode, and Q = 1 when it operates in single-transmit dual-receive mode.
Step S4.6.5: according to formula (8), separately compute the partial derivatives of the functions F_x, F_y and F_z in the X, Y and Z directions with respect to the Doppler centroid frequency f_d, obtaining:
where the derivative of the unit look vector with respect to the Doppler centroid frequency f_d is:
In the formula, f_d is the Doppler centroid frequency; b_v is the projection component of the interferometric baseline parallel to the satellite velocity direction; r_v, r_n and r_w are the components of the unit look vector on the V, N and W axes; b_pv is the baseline vector component perpendicular to the satellite velocity direction; λ is the radar wavelength; and |v| is the magnitude of the satellite velocity vector.
Step S4.7: based on the observation equation obtained in step S4.5 and the partial derivatives of the interferometric system parameters derived in step S4.6, perform a nonlinear least-squares fit of the interferometric system parameters using several corner reflector points;
The interferometric calibration model is constructed from the corner reflector points. Suppose there are n_G corner reflector points in the SAR image; at each corner reflector point the following error equation (i.e., the interferometric calibration model) holds:
In the formula, v_xG(i), v_yG(i), v_zG(i) are respectively the residuals of corner reflector point G(i) in the X, Y and Z directions; the coefficients are respectively the partial derivatives of the functions F_xG(i), F_yG(i), F_zG(i) with respect to the baseline length b_l, the baseline inclination angle b_α, the near slant range r_0, the interferometric phase and the Doppler frequency f_dop in the X, Y and Z directions; l_xG(i), l_yG(i), l_zG(i) are the constant terms formed as the differences between the known geocentric coordinates of the corner reflector point and the geocentric coordinates computed by the three-dimensional reconstruction model; G(i) denotes the i-th corner reflector point, i = 1, 2, …, n_G, with n_G corner reflector points in total.
Step S4.8: solve for the interferometric system parameters based on the least-squares principle, which specifically comprises the following sub-steps:
Step S4.8.1: from the error model given by formula (18), form the error equation
V = AΔx − l (19)
where
matrix V is the residual matrix, with v_xG(1), v_yG(1), v_zG(1) the residuals of corner reflector point G(1) in the X, Y and Z directions and v_xG(nG), v_yG(nG), v_zG(nG) those of point G(n_G); matrix A is the coefficient matrix of partial derivatives, whose entries are the partial derivatives of the functions F_xG(i), F_yG(i), F_zG(i) with respect to the baseline length b_l, the baseline inclination angle b_α, the near slant range r_0, the interferometric phase and the Doppler frequency f_dop in the X, Y and Z directions, i = 1, 2, …, n_G; matrix l is the constant-term matrix, whose elements l_xG(i), l_yG(i), l_zG(i) are computed as L_0,xG(i) − L_xG(i), L_0,yG(i) − L_yG(i), L_0,zG(i) − L_zG(i) for the i-th corner reflector point, i = 1, 2, …, n_G, with n_G the total number of corner reflector points; Δx is the correction matrix of the interferometric system parameters to be solved: the baseline length correction Δb_l, the baseline inclination correction Δb_α, the near slant range correction Δr_0, the interferometric phase correction and the Doppler frequency correction Δf_d.
Step S4.8.2: using the image imaging parameters obtained in step S1 (shown in Table 1), compute the coefficient matrix A; its calculation formulas have been described in steps S4.7 and S4.8 and are substituted directly. Using the geocentric Cartesian coordinates of the corner reflector points obtained in step S2, compute the constant term l = L_0 − L, where L is computed by the three-dimensional reconstruction model formula (7) and L_0 is obtained from the corner reflector point positions measured by the field survey personnel.
The coefficient matrix A is computed by the formula shown below:
Taking corner reflector point D002 as an example, we compute the partial derivatives of the functions F_x, F_y and F_z in the X, Y and Z directions with respect to the baseline length b_l, to illustrate the concrete calculation process. The functional form of the three-dimensional reconstruction model was given in formula (8) of step S4.5 and is shown below:
Computing the partial derivatives of F_x, F_y and F_z at point D002 with respect to the baseline length b_l involves the slant range r_s1 from the master image to the ground corner reflector, the rotation matrix, and the partial derivative of the unit look vector with respect to b_l.
Compute r_s1: from step S2.5 it follows that r_s1 = R_0 + x·ΔR. Using the D002 pixel coordinate (x, y) = (1000.5, 1638.0625) given in Table 4, together with the near slant range R_0 = 4885.9532 m and the range sampling interval ΔR = 0.536 m from Table 1 provided by step S1, r_s1 is finally obtained.
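As a numeric illustration of the slant-range step above (a sketch using only the values quoted in the text; the result is plain arithmetic from r_s1 = R_0 + x·ΔR):

```python
# Sketch: slant range from the range pixel coordinate, r_s1 = R0 + x * dR.
# All numbers are the ones quoted for point D002 in the text; nothing else is assumed.
R0 = 4885.9532   # near slant range [m] (Table 1)
dR = 0.536       # slant-range sampling interval [m] (Table 1)
x = 1000.5       # range pixel coordinate of D002 (Table 4)

r_s1 = R0 + x * dR
print(round(r_s1, 4))  # -> 5422.2212
```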
Compute the rotation matrix: according to step S4.3.5, combining the velocity vector and the baseline vector provided by step S1, the rotation matrix is obtained.
Compute the partial derivative of the unit look vector with respect to the baseline length b_l: using the values obtained in steps S4.3.1, S4.3.2 and S4.3.3, together with b_l = 0.312816 m, the derivative is obtained.
Repeating the above steps for the baseline inclination angle b_α, the near slant range r_0, the interferometric phase and the Doppler frequency f_dop yields the full coefficient matrix A. The partial derivatives of F_x, F_y and F_z at point D002 with respect to the baseline length b_l, the baseline inclination angle b_α, the near slant range r_0, the interferometric phase and the Doppler frequency f_dop are listed below:
The constant-term matrix l is computed by the formula shown below:
Taking corner reflector point D002 as an example, l = L_0 − L, where L_x, L_y, L_z are computed from the three-dimensional reconstruction model formula (7) of step S4.4, and L_0,x, L_0,y, L_0,z are obtained from the geodetic coordinates (longitude, latitude, height) = (110.0559, 35.10238, 667.631) of the D002 corner reflector control point given in Table 2, converted to geocentric Cartesian coordinates by step S2.1. This finally gives:
l_x,D002 = L_0,x − L_x = −1791657.022 − (−1791633.082) = −23.940
l_y,D002 = L_0,y − L_y = 4907631.293 − 4907598.887 = 32.406
l_z,D002 = L_0,z − L_z = 3647548.734 − 3647534.327 = 14.407
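The three constant terms above can be checked with a few lines of arithmetic (a sketch; only the coordinate values quoted in the text are used):

```python
import numpy as np

# Field-surveyed geocentric coordinates L0 of point D002 and the model-computed
# coordinates L (both quoted in the text); the constant term is l = L0 - L.
L0 = np.array([-1791657.022, 4907631.293, 3647548.734])
L  = np.array([-1791633.082, 4907598.887, 3647534.327])

l_D002 = L0 - L   # components: -23.940, 32.406, 14.407
```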
Step S4.8.3: the adjustment criterion is the common least-squares criterion V^T·V = min; solving for the corrections of the interferometric system parameters gives:
Δx = (A^T·A)^(−1)·A^T·l (20)
where Δx is the correction matrix of the interferometric system parameters to be solved, A is the coefficient matrix computed in step S4.8.2, A^T is the transpose of A, and l is the constant-term matrix.
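Formula (20) can be sketched with numpy as below. The shapes are illustrative only (random stand-in values; the real A is the 3·n_G × 5 partial-derivative matrix and l the constant-term vector described above), and `np.linalg.lstsq` is shown alongside the normal-equation form because it evaluates the same solution more stably:

```python
import numpy as np

rng = np.random.default_rng(0)
n_G = 6                              # assumed number of corner reflector points
A = rng.normal(size=(3 * n_G, 5))    # stand-in coefficient matrix: 3 rows (X, Y, Z) per point
l = rng.normal(size=3 * n_G)         # stand-in constant-term vector l = L0 - L

# Normal-equation form dx = (A^T A)^(-1) A^T l ...
dx_normal = np.linalg.solve(A.T @ A, A.T @ l)
# ... equals the least-squares solution of A dx = l:
dx_lstsq, *_ = np.linalg.lstsq(A, l, rcond=None)
# dx holds the five corrections: d_bl, d_balpha, d_r0, d_phase, d_fd
```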
Step S4.8.4: compute the root mean square of the corner reflector coordinate differences before and after each iteration, and decide accordingly whether a further iteration is required; the formula is:
In the formula, eps is the root mean square of the coordinate differences of the corner reflector points between successive iterations; the one coordinate denotes that of the i-th corner reflector point computed at the k-th iteration of the optimized error model constructed in step S4.8, and the other that computed at the (k−1)-th iteration; n_G is the total number of corner reflector points. When eps is less than the set threshold, the adjustment terminates, and the residuals of each corner reflector point before and after the adjustment are tallied. If the adjustment terminates abnormally, the relevant parameters must be checked and the computation repeated until the requirements are met.
Preferably, the threshold for eps is 10⁻⁴.
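One plausible reading of the stopping rule above (a sketch; the exact form of the elided formula is assumed to be the RMS, over points, of the 3-D coordinate change between successive iterations, compared against the 10⁻⁴ threshold):

```python
import numpy as np

def iteration_rms(P_k, P_km1):
    """RMS of corner-reflector coordinate differences between iterations k and k-1.
    P_k, P_km1: (n_G, 3) arrays of geocentric coordinates."""
    d2 = np.sum((np.asarray(P_k) - np.asarray(P_km1)) ** 2, axis=1)
    return float(np.sqrt(d2.mean()))

threshold = 1e-4                       # preferred eps threshold from the text
P_prev = np.zeros((4, 3))
P_curr = np.full((4, 3), 5e-5)         # every coordinate moved by 5e-5
eps = iteration_rms(P_curr, P_prev)    # sqrt(3) * 5e-5, about 8.66e-5
converged = eps < threshold            # True: the adjustment terminates
```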
In steps S4.5–S4.8, the partial derivatives with respect to the interferometric system parameters are computed from the three-dimensional reconstruction model, the interferometric calibration model (error model) is constructed, and the least-squares solution is carried out; the derivation, construction, solution and result output of the error model are fully described, providing corner reflector point coordinates for the subsequent steps S4.9–S4.12. In steps S4.9–S4.12, the pixel accuracy of the corner reflector points is evaluated, gross errors are rejected, and the final sub-pixel corner reflector point coordinates are output.
Step S4.9: using the interferometric system parameters obtained in step S4.8, perform interferometric parameter calibration checks on the corner reflector image point coordinates from step S3, and reject the corner reflector points with large errors;
From step S4.8 it is known that the pixel-level image coordinates {P_pix} obtained in step S3 inevitably contain points with large errors, because SAR images are susceptible to speckle noise; such points degrade the accuracy of the interferometric calibration. Accordingly, gross-error detection and rejection can be carried out based on the interferometric calibration model: the gross-error points among the corner reflectors are detected automatically, the computation is iterated, the gross-error points are rejected, and sub-pixel automatic positioning is achieved. This specifically comprises the following sub-steps:
Step S4.9.1: all corner reflector image point coordinates in the pixel-level set {P_pix} obtained in step S3 participate in the interferometric calibration adjustment; when the least-squares constraint is satisfied, the iteration terminates. The interferometric system parameters inverted by the model are then used to compute the geocentric 3D coordinates corresponding to each corner reflector point according to formula (7) of step S4.4, and the results are denoted {P_XYZ_m}.
The basic formula for computing the geocentric 3D coordinates corresponding to a corner reflector point is shown below:
Accordingly, using the optimized interferometric system parameters (including the baseline length b_l, the baseline inclination angle b_α, the near slant range r_0, the interferometric phase and the Doppler frequency f_d), the master/slave image parameter list (including the baseline vector, the 13 satellite position and velocity vectors, the radar wavelength and the near slant range provided by step S1) and the corner reflector pixel coordinates of Table 4 provided by step S3.8, the corner reflector points are substituted into the formula one by one, and the results are denoted {P_XYZ_m}.
Step S4.9.2: combining with the set {P_XYZ} of known geocentric Cartesian coordinates of the corner reflector points obtained in step S2.1, compute the planimetric point errors {P_XYZ_err} between the two sets {P_XYZ} and {P_XYZ_m} in the X and Y directions, and obtain the mean μ and standard deviation σ of the planimetric coordinate errors, with the following calculation formulas:
P_XY_err = SQRT((P_XYZ_m(X) − P_XYZ(X))² + (P_XYZ_m(Y) − P_XYZ(Y))²)
where P_XY_err is the planimetric point error in the X and Y directions, SQRT() is the square-root operation, P_XYZ_m(X) and P_XYZ_m(Y) are the X and Y components of the geocentric 3D coordinates {P_XYZ_m} obtained in step S4.9.1, and P_XYZ(X) and P_XYZ(Y) are the X and Y components of the field-surveyed geocentric 3D coordinates {P_XYZ} of the corner reflector points. μ is the mean of the planimetric coordinate errors, where P_XY_err(i) are the elements of the set {P_XYZ_err} and n_G is the total number of corner reflector points; σ is the standard deviation of the planimetric coordinate errors.
Step S4.9.3: from the range and azimuth resolutions (x_Res, y_Res) of the image obtained in step S1, the corresponding ground resolution (xy)_Res of an image point can be obtained. If the standard deviation σ of the planimetric points from step S4.9.2 is greater than the image-point ground resolution (xy)_Res, corner reflector points with large errors are present.
Preferably, (xy)_Res = 0.75 m.
Step S4.9.4: compute the difference between the planimetric standard deviation σ and k·(xy)_Res. Since the positioning accuracy of the corner reflector points must reach the sub-pixel level, i.e. better than one pixel or the ground resolution cell corresponding to one pixel, k ∈ (0.1, 1) is a proportionality coefficient expressing "better than one ground resolution cell". Gross errors are rejected with a 1σ threshold, as shown below: if σ > k·(xy)_Res, only the points whose values in {e} satisfy the following condition are retained; points not satisfying the condition are automatically marked invalid and no longer participate in the adjustment, until all corner reflector points participating in the adjustment satisfy the set threshold condition;
e ∈ (μ − σ, μ + σ)
Preferably, k = 0.5, so that k·(xy)_Res = 0.375 m.
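Steps S4.9.2–S4.9.4 amount to a mean/standard-deviation gate on the planimetric errors; a sketch follows (the helper name is ours; k = 0.5 and (xy)_Res = 0.75 m follow the preferred values above):

```python
import numpy as np

def reject_gross_errors(e, k=0.5, xy_res=0.75):
    """Return a keep-mask: if the error spread exceeds k * (xy)_Res, keep only
    points whose planimetric error lies inside (mu - sigma, mu + sigma)."""
    e = np.asarray(e, dtype=float)
    mu, sigma = e.mean(), e.std()
    if sigma <= k * xy_res:            # already within sub-pixel tolerance
        return np.ones(e.size, dtype=bool)
    return (e > mu - sigma) & (e < mu + sigma)

errors = np.array([0.21, 0.25, 0.19, 0.23, 3.80])  # one obvious blunder point
mask = reject_gross_errors(errors)
# The 3.80 m point falls outside (mu - sigma, mu + sigma) and is marked invalid.
```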
Step S4.9.5: the interferometric calibration adjustment and gross-error rejection of steps S4.7–S4.9.4 are carried out; the operation terminates after three runs, the sub-pixel positioning accuracy requirement is met, and the model optimization parameters (i.e., the interferometric system parameters) are obtained. The statistical results are shown in Table 5 below:
Table 5 Statistics of the model operation results
Step S4.10: using the model optimization parameters obtained in step S4.9.5 (including the baseline length b_l, the baseline inclination angle b_α, the near slant range r_0, the interferometric phase and the Doppler frequency), recompute the image point coordinates of the corner reflector points according to the rigorous SAR imaging model constructed in step S2. The finally obtained sub-pixel corner reflector point coordinates are (x_2, y_2), and the set of all coordinates is denoted {P_pix_f}, as shown in Table 6.
Table 6 Corner reflector image point coordinates after geometric optimization
The above is only a preferred embodiment of the present invention. For those of ordinary skill in the art, changes may be made in the specific implementation and application scope according to the idea of the present invention, and the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A sub-pixel corner reflector automatic positioning method based on a rigorous SAR imaging model, characterized in that the method specifically comprises the following steps:
S1, acquiring initial data, the initial data comprising coordinates of corner reflector points, a master image, and the imaging parameters of the master image;
S2, constructing a rigorous SAR imaging model using the acquired initial data, and computing the initial pixel coordinates (x_0, y_0) of all corner reflector points based on the constructed rigorous SAR imaging model, to obtain the set {P_init} of initial pixel coordinates of all corner reflector points;
S3, based on the set {P_init} of initial pixel coordinates of the corner reflector points obtained in step S2, solving the pixel-level coordinates (x_1, y_1) of all corner reflector points in the master image using a maximum-amplitude algorithm, and obtaining the set {P_pix} of pixel-level coordinates of all corner reflector points in the master image;
S4, performing interferometric parameter calibration checks on the pixel-level coordinates of all corner reflector points in the set {P_pix}, to obtain the sub-pixel coordinates (x_2, y_2) of the corner reflector points.
2. The method according to claim 1, characterized in that step S2 specifically comprises the following sub-steps:
S2.1, transforming the geodetic coordinates of the corner reflector points into geocentric Cartesian coordinates; specifically, converting the geodetic coordinates of a corner reflector point into geocentric Cartesian coordinates according to the following formula (1):
wherein (X_P, Y_P, Z_P) are the geocentric Cartesian coordinates of the corner reflector point, N is the radius of curvature in the prime vertical, (B, L, H) are respectively the latitude, longitude and geodetic height of the corner reflector point, and e is the first eccentricity of the reference ellipsoid;
S2.2, computing the satellite imaging time t corresponding to the master image; specifically, computing the satellite imaging time t according to the following formula (2):
t = t_0 + Δt·y (2)
wherein t_0 is the start imaging time of the master image, i.e. the imaging time of the first image line, Δt is the imaging time interval, and y is the azimuth coordinate of the corner reflector in the radar coordinate system;
S2.3, fitting a second-order polynomial between the satellite position vector and the satellite imaging time t, the second-order polynomial being as shown in formula (3):
wherein (X_S, Y_S, Z_S) is the satellite position vector to be fitted, and the coefficients form the second-order polynomial coefficient matrix of the position fit;
S2.4, fitting a second-order polynomial between the satellite velocity vector and the satellite imaging time t, the second-order polynomial being as shown in formula (4):
wherein the vector to be fitted is the satellite velocity vector, t is the satellite imaging time, and the coefficients form the second-order polynomial coefficient matrix of the velocity fit;
S2.5, constructing the rigorous SAR imaging model, as shown in formula (5):
wherein V_xS·(X_S − X_P) + V_yS·(Y_S − Y_P) + V_zS·(Z_S − Z_P) = −f_d·λ·(R_0 + ΔR·x)/2 is the Doppler equation and the second expression of formula (5) is the range equation; (X_S, Y_S, Z_S) is the satellite position vector, (X_P, Y_P, Z_P) are the geocentric Cartesian coordinates of the corner reflector point, (V_xS, V_yS, V_zS) is the satellite velocity vector, x is the range coordinate of the corner reflector point, R_0 is the near slant range, ΔR is the range sampling interval, f_d is the Doppler centroid frequency, and λ is the radar wavelength;
S2.6, solving the rigorous SAR imaging model constructed in step S2.5 by an iterative method to obtain the initial pixel coordinates of the corner reflector point;
S2.7, repeating steps S2.1–S2.6 to compute the initial pixel coordinates of all corner reflector points, obtaining the set {P_init} of initial pixel coordinates of all corner reflector points.
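Formula (1) of sub-step S2.1 is the standard geodetic-to-geocentric conversion. A self-contained sketch follows; the WGS84 semi-major axis and flattening are assumed defaults, since the claim does not name the reference ellipsoid:

```python
import math

def geodetic_to_ecef(lat_deg, lon_deg, h, a=6378137.0, f=1 / 298.257223563):
    """Formula (1): geodetic (B, L, H) -> geocentric Cartesian (X, Y, Z).
    a, f: assumed WGS84 ellipsoid parameters."""
    B = math.radians(lat_deg)
    L = math.radians(lon_deg)
    e2 = f * (2.0 - f)                              # first eccentricity squared
    N = a / math.sqrt(1.0 - e2 * math.sin(B) ** 2)  # prime-vertical radius of curvature
    X = (N + h) * math.cos(B) * math.cos(L)
    Y = (N + h) * math.cos(B) * math.sin(L)
    Z = (N * (1.0 - e2) + h) * math.sin(B)
    return X, Y, Z

# The D002 control point quoted in the description (lon 110.0559, lat 35.10238, h 667.631)
# converts to approximately the L0 values used in step S4.8.2.
X, Y, Z = geodetic_to_ecef(35.10238, 110.0559, 667.631)
```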
3. The method according to claim 2, characterized in that step S2.6 specifically comprises the following sub-steps:
S2.6.1, iteratively computing the initial azimuth coordinate y of the corner reflector point based on the Doppler equation of formula (5);
S2.6.2, computing the initial range coordinate x of the corner reflector point according to the range equation of formula (5).
4. The method according to claim 3, characterized in that step S2.6.1 comprises the following sub-steps:
S2.6.1.1, setting an initial pixel coordinate for the corner reflector point, the initial value being the center point (x_center, y_center) of the master SAR image;
S2.6.1.2, computing the satellite imaging time t_y from the azimuth coordinate y_center of the corner reflector point, and computing the corresponding satellite sensor position vector (X_S, Y_S, Z_S) and velocity vector from the fitted polynomials;
S2.6.1.3, using the satellite sensor position vector (X_S, Y_S, Z_S), the velocity vector and the corner reflector position vector (X_P, Y_P, Z_P), computing the Doppler value f_d according to the Doppler equation while also computing the Doppler rate f_d'; then, combining with the measured Doppler centroid f_dop, computing the required time adjustment Δt according to the following formula:
Δt = −(f_d − f_dop)/f_d';
S2.6.1.4, correcting the satellite imaging time t_y with the time adjustment Δt to obtain the new imaging time t_{y+1} = t_y + Δt;
S2.6.1.5, repeating steps S2.6.1.2–S2.6.1.4 with t_{y+1} until the absolute value of Δt is less than a threshold ε; the iteration then terminates and the master image imaging time t_y is output;
S2.6.1.6, from the solved master image imaging time t_y, computing the azimuth coordinate y of the corner reflector point according to y = (t_y − t_0)/Δt_y, wherein t_0 is the start time and Δt_y is the azimuth time interval.
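The loop of sub-steps S2.6.1.2–S2.6.1.5 is a Newton-style search for the azimuth time at which the computed Doppler matches the measured centroid. A simplified sketch (the real f_d(t) comes from the Doppler equation with the fitted orbit polynomials; here it is replaced by an assumed linear stand-in):

```python
def solve_azimuth_time(f_dop, fd_of_t, fd_rate_of_t, t0, eps=1e-8, max_iter=50):
    """Iterate dt = -(f_d(t) - f_dop) / f_d'(t); t <- t + dt, until |dt| < eps."""
    t = t0
    for _ in range(max_iter):
        dt = -(fd_of_t(t) - f_dop) / fd_rate_of_t(t)
        t += dt
        if abs(dt) < eps:
            break
    return t

# Assumed stand-in Doppler history: f_d(t) = -2000 * (t - 3.2) Hz, zero at t = 3.2 s.
fd = lambda t: -2000.0 * (t - 3.2)
fd_rate = lambda t: -2000.0
t_y = solve_azimuth_time(f_dop=0.0, fd_of_t=fd, fd_rate_of_t=fd_rate, t0=0.0)
# The azimuth coordinate then follows as y = (t_y - t_0) / dt_azimuth.
```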
5. The method according to claim 3, characterized in that step S2.6.2 comprises the following sub-steps:
S2.6.2.1, from the initial azimuth coordinate y of the corner reflector point obtained in step S2.6.1, computing the satellite sensor position vector (X_S, Y_S, Z_S) according to the following formula:
wherein t is the satellite imaging time corresponding to the master image currently being computed, t_0 is the start imaging time of the master image, i.e. the imaging time of the first image line, Δt is the imaging time interval, y is the azimuth coordinate of the corner reflector point in the radar coordinate system, (X_S, Y_S, Z_S) is the satellite position vector to be fitted, and the coefficients form the second-order polynomial coefficient matrix of the position fit;
S2.6.2.2, combining the range equation with the position vector (X_P, Y_P, Z_P) of the corner reflector point, the near slant range R_0 and the range sampling interval ΔR, computing the initial range coordinate x of the corner reflector point.
6. The method according to claim 1, characterized in that step S3 specifically comprises the following sub-steps:
S3.1, centered on the initial pixel coordinate (x_0, y_0) of the corner reflector point obtained in step S2, extracting a complex matrix with a window size of W_1 × W_1;
S3.2, computing the amplitude of the W_1 × W_1 complex matrix to obtain a first amplitude matrix;
specifically, computing the amplitude of the W_1 × W_1 complex matrix according to the following formula:
wherein Re(i, j) denotes the real part of pixel value (i, j) of the W_1 × W_1 complex matrix and Im(i, j) denotes its imaginary part;
S3.3, finding the maximum amplitude value in the first amplitude matrix and computing the pixel coordinate of the maximum-amplitude point in the master image;
S3.4, centered on that pixel coordinate, extracting a complex matrix with a window size of W_2 × W_2;
S3.5, computing the amplitude of the W_2 × W_2 complex matrix to obtain a second amplitude matrix;
specifically, computing the amplitude of the W_2 × W_2 complex matrix according to the following formula:
wherein Re(i, j) denotes the real part of pixel value (i, j) of the W_2 × W_2 complex matrix and Im(i, j) denotes its imaginary part;
S3.6, oversampling the second amplitude matrix by a factor of N_1 to obtain an inverse-transform matrix, and computing the amplitude of each pixel of the inverse-transform matrix to obtain a third amplitude matrix;
S3.7, finding the maximum amplitude value in the third amplitude matrix and computing the pixel-level coordinate (x_1, y_1) of the maximum-amplitude point in the master image;
S3.8, repeating steps S3.1–S3.7 for all corner reflector points to obtain the set {P_pix} of pixel-level coordinates of all corner reflector points in the master image.
7. The method according to claim 6, characterized in that W_2 < W_1.
8. The method according to claim 6, characterized in that step S3.6 specifically comprises the following sub-steps:
S3.6.1, applying a fast Fourier transform (FFT) to the second amplitude matrix to transfer it to the complex domain;
S3.6.2, oversampling the transformed second amplitude matrix by a factor of N_1;
S3.6.3, computing the amplitude of each pixel in the inverse-transform matrix to obtain the third amplitude matrix;
specifically, computing the amplitude of the inverse-transform matrix according to the following formula:
wherein Re(i, j) denotes the real part of pixel value (i, j) of the inverse-transform matrix and Im(i, j) denotes its imaginary part.
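The oversampling of claim 8 (forward FFT, zero-pad the spectrum by the factor N_1, inverse transform, take the amplitude peak) can be sketched in one dimension with numpy; the centered zero-padding layout via `fftshift`/`ifftshift` is our assumption about the implementation:

```python
import numpy as np

def oversample_peak_1d(sig, n1):
    """FFT -> centered zero-padding by factor n1 -> inverse FFT -> amplitude peak.
    Returns the peak location in (sub-pixel) units of the original sampling."""
    n = sig.size
    spec = np.fft.fftshift(np.fft.fft(sig))
    padded = np.zeros(n * n1, dtype=complex)
    lo = (n * n1 - n) // 2
    padded[lo:lo + n] = spec                          # spectrum centered in the padded array
    up = np.fft.ifft(np.fft.ifftshift(padded)) * n1   # rescale to preserve amplitude
    return np.argmax(np.abs(up)) / n1

# A point-target-like response peaking between integer samples, at 7.25 px:
x = np.arange(16, dtype=float)
sig = np.sinc(x - 7.25)
peak = oversample_peak_1d(sig, n1=8)   # recovered peak lands near 7.25
```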
9. The method according to claim 1, characterized in that step S4 specifically comprises the following sub-steps:
S4.1, based on the rigorous SAR imaging model and the interferometric phase equation, determining a three-dimensional reconstruction model based on orthogonal decomposition of the look vector, wherein the three-dimensional reconstruction model is:
wherein P is the ground point coordinate, S_1 is the satellite position vector, r_1 is the look vector, r_s1 denotes the distance from the master-image satellite position to the ground target, and the unit look vector is obtained by normalizing the look vector r_1;
S4.2, computing the satellite position vector S_1, wherein (X_S, Y_S, Z_S) is the satellite position vector to be fitted, the coefficients form the second-order polynomial coefficient matrix of the position fit, and t is the satellite imaging time corresponding to the master image;
S4.3, computing the distance r_s1 from the master-image satellite position to the ground target and the unit look vector;
S4.4, based on the master-image satellite position vector S_1 obtained in step S4.2 and the r_s1 and unit look vector obtained in step S4.3, deriving the three-dimensional reconstruction model as:
wherein P is the ground point coordinate, S_1 is the satellite position vector, r_1 is the look vector, r_s1 denotes the distance from the master-image satellite to the ground target point, the transition matrix is the rotation matrix, r_v, r_n and r_w are the components of the unit look vector on the V, N and W axes, b is the interferometric baseline vector, v is the platform velocity vector with its corresponding unit vector, λ is the radar wavelength, the phase term is the absolute interferometric phase, Q is the antenna operating mode, |b_pv| is the magnitude of the baseline vector component perpendicular to the satellite velocity direction, b_v is the projection of the baseline vector onto the satellite velocity direction, f_dop is the Doppler centroid frequency, and B is the magnitude of the interferometric baseline;
S4.5, rewriting the three-dimensional reconstruction model of step S4.4 to obtain the observation equation of the three-dimensional reconstruction model:
wherein F_x(X), F_y(X) and F_z(X) are respectively the functions in the X, Y and Z directions; the interferometric system parameters X to be optimized are the baseline length b_l, the baseline inclination angle b_α, the near slant range r_0, the interferometric phase and the Doppler frequency f_d; S_1x, S_1y, S_1z are the coordinates of the master-image satellite position; r_s1 is the slant range from the master image to the ground corner reflector; u11, u12, u13, u21, u22, u23, u31, u32, u33 are the elements of the rotation matrix; r_v, r_n and r_w are the components of the unit look vector on the V, N and W axes; and P_x, P_y, P_z are the geocentric Cartesian coordinates of the corner reflector;
S4.6, computing the partial derivatives with respect to the interferometric system parameters in the observation equation of the three-dimensional reconstruction model;
S4.7, based on the observation equation obtained in step S4.5 and the partial derivatives of the interferometric system parameters derived in step S4.6, performing a nonlinear least-squares fit of the interferometric system parameters using several corner reflector points;
S4.8, solving for the interferometric system parameters based on the least-squares principle;
S4.9, using the interferometric system parameters obtained in step S4.8, performing interferometric parameter calibration checks on the corner reflector image point coordinates from step S3, rejecting the corner reflector points with large errors, and obtaining the model optimization parameters;
S4.10, using the model optimization parameters obtained in step S4.9, including the baseline length b_l, the baseline inclination angle b_α, the near slant range r_0, the interferometric phase and the Doppler frequency f_d, recomputing the image point coordinates of the corner reflector points according to the rigorous SAR imaging model constructed in step S2; the final corner reflector point coordinates are (x_2, y_2), and the set of all coordinates is denoted {P_pix_f}.
10. A computer-readable medium carrying a computer program, the computer program being configured to execute the method according to any one of claims 1–9.
CN201611180810.5A 2016-12-19 2016-12-19 Sub-pixel-level corner reflector automatic positioning method based on the tight imaging model of SAR Active CN106526593B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611180810.5A CN106526593B (en) 2016-12-19 2016-12-19 Sub-pixel-level corner reflector automatic positioning method based on the tight imaging model of SAR

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611180810.5A CN106526593B (en) 2016-12-19 2016-12-19 Sub-pixel-level corner reflector automatic positioning method based on the tight imaging model of SAR

Publications (2)

Publication Number Publication Date
CN106526593A CN106526593A (en) 2017-03-22
CN106526593B true CN106526593B (en) 2019-01-11

Family

ID=58340373

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611180810.5A Active CN106526593B (en) 2016-12-19 2016-12-19 Sub-pixel-level corner reflector automatic positioning method based on the tight imaging model of SAR

Country Status (1)

Country Link
CN (1) CN106526593B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107367716B (en) * 2017-07-04 2019-09-10 武汉大学 A kind of high-precision satellite-borne SAR geometric calibration method
CN109800380B (en) * 2019-01-30 2023-03-17 上海卫星工程研究所 Method for constructing rigorous imaging geometric model for ground detection of satellite-borne microwave remote sensing instrument
CN109993795B (en) * 2019-04-08 2021-04-09 中国科学院电子学研究所 Method, device, electronic equipment and medium for resolving SAR image ground position
CN110146858B (en) * 2019-05-24 2021-10-29 北京航空航天大学 High-precision full-link spaceborne SAR radiometric calibration simulation method
CN110398477B (en) * 2019-07-03 2022-06-24 香港光云科技有限公司 Material classification and identification method
CN110244302B (en) * 2019-07-05 2023-02-17 苏州科技大学 Three-dimensional transformation method for image pixel coordinates of foundation synthetic aperture radar
CN112161632B (en) * 2020-09-23 2022-08-12 北京航空航天大学 Satellite formation initial positioning method based on relative position vector measurement
CN113391310A (en) * 2021-06-11 2021-09-14 辽宁工程技术大学 Corner reflector point automatic extraction method based on system geometric error compensation
CN117665818B (en) * 2024-02-02 2024-04-12 北京东方至远科技股份有限公司 Plane position correction method and system for synthetic aperture radar satellite

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2284569B1 (en) * 2009-07-16 2013-05-01 EADS Deutschland GmbH Automatic focussing of SAR raw data based on the estimation of the phase error function
CN103576149A (en) * 2013-06-05 2014-02-12 河海大学 Foundation interference radar three-dimensional deformation extraction method based on amplitude information
CN105182339A (en) * 2015-09-25 2015-12-23 昆明理工大学 Method for correcting environmental influences at slope deformation monitoring on the basis of corner reflector
CN105783754A (en) * 2016-03-14 2016-07-20 河海大学 Three-dimensional-laser-scanning-based GBInSAR three-dimensional displacement field extraction method
CN105842668A (en) * 2016-03-22 2016-08-10 中国科学院电子学研究所 Circular SAR-based dihedral corner reflector optimal imaging azimuth determining method
CN106093938A (en) * 2016-05-17 2016-11-09 长安大学 A kind of mining area based on manual corner reflector side-play amount deformation monitoring method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100048 No.1 Hospital of Baisheng Village, Zizhuyuan, Haidian District, Beijing

Patentee after: Ministry of Natural Resources Land Satellite Remote Sensing Application Center

Address before: 100048 No.1 Hospital of Baisheng Village, Zizhuyuan, Haidian District, Beijing

Patentee before: Satellite Surveying and Mapping Application Center, NASG