CN103278140A - Coordinate back-calculation method for a TDICCD (time delay and integration charge-coupled device) linear-array push-broom sensor


Info

Publication number: CN103278140A (application CN201310201971.8A; granted as CN103278140B)
Authority: CN (China)
Prior art keywords: coordinate, raw image, virtual image, sample
Legal status: Granted; currently Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 金淑英, 王密
Assignee (original and current): Wuhan University (WHU)
Application filed by Wuhan University; published as CN103278140A, granted and published as CN103278140B

Abstract

The invention discloses a coordinate back-calculation method for a TDICCD (time delay and integration charge-coupled device) linear-array push-broom sensor. The method comprises the following steps: 1, according to a geometric model M, back-calculating from the object-space coordinate Pi (Bi, Li, H0) to obtain a virtual-image coordinate (sample_i, vline_i); 2, computing the raw-image row number line_i from vline_i; 3, computing an object-space coordinate P' (B', L') from the raw-image coordinate p (sample_i, line_i) and the object-space elevation H0 of the ground-truth object coordinate P0; and 4, comparing the coordinate P' (B', L') computed in step 3 with the object-space ground truth P0 (B0, L0) to judge whether the iteration termination condition is met; if so, outputting the raw-image coordinate p (sample_i, line_i), otherwise setting i = i + 1, computing the new object-space coordinate Pi = 2 * P0 - P', and returning to step 1 to continue the iteration.

Description

Coordinate back-calculation method for a TDICCD linear-array push-broom sensor
Technical field
The invention belongs to the field of space photogrammetry and relates to back-calculating image-space coordinates from object-space coordinates with the collinearity-equation model of a TDICCD linear-array push-broom sensor; specifically, it is a high-precision coordinate back-calculation method based on virtual-image row numbers.
Background technology
The TDICCD (time delay and integration charge-coupled device) is at present the main sensor for acquiring high-resolution optical satellite imagery. Its line integration time can be adjusted in real time during push-broom imaging. On the one hand this compensates the image motion, so that TDICCD images are sharper than traditional CCD (charge-coupled device) images; on the other hand it reduces the smoothness of the linear-array sensor's collinearity-equation model, so that the fitting accuracy of a rational-polynomial model becomes insufficient, or the collinearity-based iterative back-calculation of image-space coordinates from object-space coordinates fails to converge, degrading the geometric quality of the image products.
Fig. 1 is a schematic diagram of TDICCD linear-array push-broom imaging: St is the projection centre at time t, p(sample, line) is the image-space coordinate of a point, and P(X, Y, Z) is its object-space coordinate. The geometric processing of TDICCD push-broom data makes heavy use of forward and back coordinate transformations. The forward calculation based on the collinearity-equation model is the basis of back-calculating image-space coordinates from object-space coordinates. The usual preprocessing for back-calculation runs as follows: first, on the basis of the collinearity forward calculation, compute the image-space and object-space coordinates of a set of virtual 3-D grid control points; then determine the form of the geometric model M from object space to image space (a 3-D polynomial or a rational polynomial); finally, use the virtual 3-D grid control points to solve the parameters of M. Back-calculation methods fall into two classes: direct computation with a rational polynomial, and iterative computation with the collinearity equation plus a 3-D polynomial (iteration is needed because the image-space coordinate is unknown, hence the imaging time is unknown, hence the exterior-orientation elements are unknown). In the iteration, the 3-D polynomial must supply a fairly accurate initial image-space coordinate to reduce the number of iterations, control the iterative process, and keep it from diverging. Hence, whichever method is used, the geometric model M is indispensable.
The fitting accuracy of the geometric model M depends on the smoothness of the linear-array sensor's collinearity-equation model. When the TDICCD line integration time jumps, that smoothness drops sharply, the fitting accuracy of M (whether rational polynomial or 3-D polynomial) drops with it, and the back-calculation accuracy no longer meets the requirements of use. Therefore the geometric processing of TDICCD push-broom data, such as sensor calibration and orthorectification, urgently needs a high-precision coordinate back-calculation method.
Summary of the invention
The problem addressed by the invention is how to achieve high-precision coordinate back-calculation with the collinearity-equation model when the TDICCD line integration time jumps, so as to improve the geometric accuracy of image products.
The technical solution of the invention is a coordinate back-calculation method for a TDICCD linear-array push-broom sensor that back-calculates raw-image coordinates through a virtual image. The virtual image is an image with constant line integration time; over a fixed time span, the line integration time is inversely proportional to the number of image lines. The back-calculation is implemented as follows.
Let Pi denote the object-space coordinate at the start of the i-th iteration; for i = 0, P0 (B0, L0, H0) is the externally supplied object-space ground truth, where (B0, L0) are geographic latitude and longitude and H0 is the object-space elevation. p(sample_i, line_i) is the raw-image coordinate output by the i-th iteration, sample_i being the raw-image column number and line_i the raw-image row number. Initialize i = 0 and iterate as follows.
Step 1: according to the geometric model M, back-calculate from the object-space coordinate Pi (Bi, Li, H0) to obtain the virtual-image coordinate (sample_i, vline_i).
The geometric model M takes the form of a 3-D polynomial or a rational polynomial; its coefficients are obtained by least-squares fitting to the virtual 3-D grid control-point coordinates of the virtual image.
Step 2: compute the raw-image row number line_i from the row number vline_i of the virtual-image coordinate, as follows.
The virtual image has the same numbers of rows and columns as the raw image, so the line integration time dt of the virtual image equals the total imaging time of the raw image divided by its total number of lines. First, from the relation between virtual row number and imaging time, obtain the imaging time of virtual row vline_i: t_vline = start_time + vline_i * dt, where start_time is the imaging time of the first image line. Then, from the relation between raw-image row number and imaging time, obtain the raw-image row number line_i corresponding to that imaging time.
Step 3: from the raw-image coordinate p(sample_i, line_i) and the object-space elevation H0 of the ground truth P0, compute the object-space coordinate P' (B', L').
Step 4: compare P' (B', L') computed in step 3 with the ground truth (B0, L0) of P0 and judge whether the iteration termination condition is met. If so, output the raw-image coordinate p(sample_i, line_i); otherwise set i = i + 1, compute the new object-space coordinate Pi = 2 * P0 - P', and return to step 1 to continue the iteration.
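As a concrete illustration, the four steps above can be sketched in Python. The callables geo_model, vline_to_line and forward_calc stand in for the patent's model M, the row conversion of step 2 and the collinearity forward calculation; all names here are illustrative assumptions, not part of the patent.

```python
def back_calculate(P0, geo_model, vline_to_line, forward_calc,
                   tol=1e-8, max_iter=20):
    """Sketch of the iterative back-calculation (steps 1-4).

    P0            : (B0, L0, H0) object-space ground truth (lat, lon, height)
    geo_model     : (B, L, H) -> (sample, vline), the geometric model M
    vline_to_line : vline -> line, virtual row to raw row (step 2)
    forward_calc  : (sample, line, H) -> (B, L), collinearity forward calc
    """
    B0, L0, H0 = P0
    Bi, Li = B0, L0                                   # P0 starts the iteration
    for _ in range(max_iter):
        sample_i, vline_i = geo_model(Bi, Li, H0)     # step 1: model M inverse
        line_i = vline_to_line(vline_i)               # step 2: row conversion
        Bp, Lp = forward_calc(sample_i, line_i, H0)   # step 3: forward calc P'
        if abs(Bp - B0) <= tol and abs(Lp - L0) <= tol:
            break                                     # step 4: converged
        Bi, Li = 2.0 * B0 - Bp, 2.0 * L0 - Lp         # Pi = 2*P0 - P'
    return sample_i, line_i
```

With exact inverse callables the loop terminates on the first pass; with an approximate model M, the reflection update Pi = 2 * P0 - P' cancels the residual bias between iterations.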
Moreover, the coefficients of the geometric model M are obtained by least-squares fitting to the virtual 3-D grid control-point coordinates of the virtual image, as follows.
Divide the virtual image into a regular 3-D grid and compute the virtual 3-D grid control points; determine the form of the geometric model M from object space to virtual image space; and, given an initial value of the model coefficient vector X, iterate as follows:
1) by the least-squares principle, use the virtual 3-D grid control points to form the linearized error equation v = Ax - l, where x is the correction vector of the model coefficients, A is the coefficient matrix of x, and l is the residual vector of the virtual-image coordinates;
2) solve the normal equation of the error equation by least squares, obtaining the coefficient correction x = (A^T A)^(-1) (A^T l);
3) update the model coefficients: X' = X + x;
4) judge whether the iteration termination condition is met; if so, stop and output the model coefficients X'; otherwise return to 1) with the updated coefficients X' from 3) and iterate again.
The above steps compute high-precision raw-image coordinates from object-space coordinates. The method only uses the concept of a virtual image and never needs to generate an actual virtual image; it can be realized simply and quickly on top of the collinearity-based forward calculation. It is applicable to coordinate back-calculation for TDICCD images whose line frequency jumps, genuinely solves the problem TDICCD sensors encounter in high-precision geometric processing, and improves the geometric quality of image products.
Description of drawings
Fig. 1 is a schematic diagram of TDICCD linear-array push-broom imaging in the prior art;
Fig. 2 is a schematic diagram of the correspondence between virtual-image row vline and raw-image row line in the embodiment of the invention;
Fig. 3 is a schematic diagram of the imaging ray through image point p intersecting several virtual elevation planes H_k (k = 1, 2, ...) in the embodiment;
Fig. 4 is a schematic diagram of the geometric model M being used to compute initial virtual-image coordinates in the embodiment;
Fig. 5 is a schematic diagram of the iterative process of solving raw-image coordinates from object-space coordinates in the embodiment;
Fig. 6 is a schematic diagram of the iterative process of solving the coefficients of the geometric model M in the embodiment;
Fig. 7 is a flow chart of the iterative process of solving raw-image coordinates from object-space coordinates in the embodiment.
Embodiment
The invention proposes a high-precision coordinate back-calculation method based on virtual row numbers, applicable to TDICCD linear-array push-broom sensors. The problem to be solved is how to achieve high-precision image-coordinate back-calculation when the line frequency jumps during TDICCD push-broom imaging, while simultaneously eliminating inaccurate initial values, excessive iteration counts and non-convergence in the iterative back-calculation.
The technical solution of the invention is described in detail below with reference to the drawings and the embodiment.
The embodiment operates on a TDICCD raw image whose line frequency jumps; the workflow can be run automatically with computer software.
The specific implementation of each step is described first.
(1) Establish the virtual image and realize the conversion from virtual-image coordinates to raw-image coordinates.
The invention introduces the concept of a virtual image, computes its line integration time, and realizes forward and back conversion between virtual-image and raw-image coordinates.
Because the raw image has line-frequency jumps (that is, line-integration-time jumps), the along-track ground extent covered by each raw-image line is unequal (see Fig. 2), which ultimately causes the various problems in back-calculating image-space coordinates from object-space coordinates. To solve them, the concept of a virtual image is proposed. A virtual image is an image with constant line integration time; over a fixed time span, the line integration time is inversely proportional to the number of image lines. The line integration time of the virtual image can (but need not) be computed as follows: assume the virtual image has the same numbers of rows and columns as the raw image; then its line integration time equals the total imaging time of the raw image divided by the raw image's total number of lines:
dt = (end_time - start_time) / h
where start_time is the imaging time of the first image line, end_time is the imaging time of the last image line, h is the total number of image lines, and w is the total number of image columns.
With the line integration time fixed, the relation between virtual-image row number and imaging time is established. Let the row number of the virtual image be vline (0 <= vline <= h - 1) and its imaging time be t_vline; then:
t_vline = start_time + vline * dt;  vline = (t_vline - start_time) / dt
The virtual image and the raw image have the same width and identical column coordinates, so only the row number needs converting. The correspondence between virtual-image row vline and raw-image row line is converted as follows.
Conversion from virtual-image row vline to raw-image row line:
First, from the relation between virtual row number and imaging time, obtain the imaging time of virtual row vline: t_vline = start_time + vline * dt. Then, from the relation between raw-image row number and imaging time (for example by piecewise linear interpolation), obtain the raw-image row line corresponding to that imaging time. This realizes the conversion from virtual-image row vline to raw-image row line.
Conversion from raw-image row line to virtual-image row vline:
First, from the relation between raw-image row number and imaging time (for example by piecewise linear interpolation), obtain the imaging time t_line of raw row line; then, from the relation between virtual row number and imaging time, obtain the virtual-image row vline = (t_line - start_time) / dt. This realizes the conversion from raw-image row line to virtual-image row vline.
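The two conversions can be sketched as follows, under the assumption that the raw image's imaging times are available as a monotone array line_times (line_times[i] = imaging time of raw line i), so that the "relation between raw row number and imaging time" is the piecewise linear interpolation the text suggests; all names are illustrative.

```python
import numpy as np

def vline_to_time(vline, start_time, dt):
    # virtual row -> imaging time: t = start_time + vline * dt
    return start_time + vline * dt

def time_to_vline(t, start_time, dt):
    # imaging time -> virtual row: vline = (t - start_time) / dt
    return (t - start_time) / dt

def line_to_time(line, line_times):
    # raw row -> imaging time, piecewise linear in the per-line time stamps
    return np.interp(line, np.arange(len(line_times)), line_times)

def time_to_line(t, line_times):
    # imaging time -> raw row, inverse interpolation (line_times monotone)
    return np.interp(t, line_times, np.arange(len(line_times)))

def vline_to_line(vline, start_time, dt, line_times):
    return time_to_line(vline_to_time(vline, start_time, dt), line_times)

def line_to_vline(line, start_time, dt, line_times):
    return time_to_vline(line_to_time(line, line_times), start_time, dt)
```

Because both conversions go through the common imaging-time axis, the round trip line -> vline -> line is exact wherever the interpolation nodes are hit.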
(2) Establish the forward-calculation relation from the raw-image coordinate (sample, line) to object-space coordinates. Forward calculation by the collinearity-equation principle is an existing algorithm; it is summarized here for reference.
First, from the column number sample of the raw-image coordinate and the sensor's interior-orientation elements, obtain the imaging-ray vector in the camera coordinate system, u = (g1(sample), g2(sample), 1)^T, where sample is the detector number (i.e. the column of the raw-image coordinate) and g1(sample), g2(sample) are the tangents of the two view angles of the ray vector in the camera coordinate system, obtained from the interior-orientation model.
Then, from the row number line of the raw-image coordinate and the relation between imaging time and line frequency, obtain the imaging time t of image point p(sample, line), and obtain the rotation matrix R_c2t(t) at time t from the Earth-centred inertial coordinate system J2000 to the Earth-centred Earth-fixed WGS84 coordinate system.
From the orbit measurement data, establish a model of the satellite orbit parameters as a function of time (e.g. a cubic polynomial); the modelling itself is prior art. From the imaging time t of point p, interpolate the orbit parameters at that moment: the satellite position (Xs, Ys, Zs)^T and velocity (Xvs, Yvs, Zvs)^T, and thereby obtain the rotation matrix R_GF(t) at time t from the local orbital coordinate system OXgYgZg to the Earth-centred inertial system J2000.
Likewise, from the attitude measurement data, establish a model of the satellite attitude parameters as a function of time (e.g. a first-order polynomial); the modelling is prior art. From the imaging time t of p, interpolate the attitude parameters roll, pitch and yaw at that moment, and thereby obtain the rotation matrix R_FB(t) at time t from the satellite body coordinate system to the local orbital coordinate system OXgYgZg.
Finally, accounting for the sensor mounting-angle parameters, let R_BS be the rotation matrix from the camera coordinate system to the satellite body coordinate system; the imaging ray through point p then satisfies the following collinearity equation:
(X, Y, Z)^T = (Xs, Ys, Zs)^T_t + m * R_c2t(t) R_GF(t) R_FB(t) R_BS * (g1(sample), g2(sample), 1)^T
Let:
R(t) = R_c2t(t) R_GF(t) R_FB(t)
u = (g1(sample), g2(sample), 1)^T
Then the ray vector through image point p, expressed in the WGS84 Earth-centred Earth-fixed coordinate system, is:
(X̄, Ȳ, Z̄)^T = R(t) R_BS u
which gives:
(X, Y, Z)^T = (Xs, Ys, Zs)^T + m (X̄, Ȳ, Z̄)^T    (1)
The scale factor m satisfies:
m = (X - Xs)/X̄ = (Y - Ys)/Ȳ = (Z - Zs)/Z̄    (2)
The ray from image point p(sample, line) to object point P(X, Y, Z) intersects the Earth surface. Let the semi-major axis of the WGS84 ellipsoid be a, the semi-minor axis b, and the ellipsoidal height of the intersection point P be H; then the object-space coordinates (X, Y, Z) of P satisfy:
(X^2 + Y^2)/(a + H)^2 + Z^2/(b + H)^2 = 1    (3)
Substituting (2) into (3) and simplifying yields a quadratic equation in m:
[(X̄^2 + Ȳ^2)/(a + H)^2 + Z̄^2/(b + H)^2] * m^2 + 2 * [(Xs X̄ + Ys Ȳ)/(a + H)^2 + Zs Z̄/(b + H)^2] * m + [(Xs^2 + Ys^2)/(a + H)^2 + Zs^2/(b + H)^2] = 1    (4)
Solve (4) for m, substitute into (1) to obtain the object-space coordinates (X, Y, Z) of P, and convert the geocentric rectangular coordinates (X, Y, Z) to geographic latitude and longitude (B, L). Thus, given the object-space elevation H, the object-space coordinates (X, Y, Z) or (B, L) of a point can be computed from its image coordinate (sample, line).
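The quadratic (4) can be solved directly. The sketch below, with illustrative names, intersects the ray of equation (1) with the height-H ellipsoid of equation (3) and returns the near intersection; S is the satellite position and u_bar the rotated ray vector, both in WGS84.

```python
import numpy as np

def intersect_ellipsoid(S, u_bar, H, a=6378137.0, b=6356752.3142):
    """Intersect the ray X = S + m * u_bar (eq. 1) with the height-H
    ellipsoid (eq. 3).  Defaults are the WGS84 semi-axes; function and
    argument names are illustrative assumptions."""
    Xs, Ys, Zs = S
    Xb, Yb, Zb = u_bar
    A = (Xb**2 + Yb**2) / (a + H)**2 + Zb**2 / (b + H)**2
    B = (Xs * Xb + Ys * Yb) / (a + H)**2 + Zs * Zb / (b + H)**2
    C = (Xs**2 + Ys**2) / (a + H)**2 + Zs**2 / (b + H)**2 - 1.0
    disc = B**2 - A * C           # quarter-discriminant of A m^2 + 2 B m + C = 0
    if disc < 0:
        raise ValueError("ray misses the ellipsoid")
    m = (-B - np.sqrt(disc)) / A  # near root: first surface hit along the ray
    return np.array(S, dtype=float) + m * np.array(u_bar, dtype=float)
```

Converting the returned geocentric (X, Y, Z) to geographic (B, L) is a standard geodetic conversion and is omitted here.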
Combining the two processes above realizes the forward calculation from the virtual-image coordinate (sample, vline) to object-space coordinates:
First, following section (1), compute the raw-image row line from the virtual-image row vline using the relation between virtual and raw row numbers;
Then, following section (2), with the collinearity forward-calculation relation, compute the object-space coordinates (X, Y, Z) or (B, L) of the point from the raw-image coordinate (sample, line) and the object-space elevation H.
(3) Based on the virtual 3-D grid control-point coordinates of the virtual image, solve the geometric model M and realize the back-calculation from object-space coordinates to virtual-image coordinates. The embodiment divides the regular 3-D grid on the virtual image rather than on the raw image; this keeps the accuracy of M high, keeps the number of back-calculation iterations small, and avoids divergence. The following three steps may be used in practice:
(1) Divide and compute the virtual 3-D grid control points.
First divide the virtual image into a regular 3-D grid. Let the virtual image be w * h pixels (the width w equals that of the raw image; the height h may also, but need not, equal that of the raw image). If the planimetric block size is BlockSize, the number of grid nodes is Nx * Ny, where Nx = int(w / BlockSize) + 1, Ny = int(h / BlockSize) + 1, and int() rounds down. For the division of the elevation planes, let the maximum elevation be Hmax, the minimum elevation Hmin and the number of planes Nz; then the height of each elevation plane is
H_i = Hmin + i * (Hmax - Hmin) / (Nz - 1)
where i = 0, 1, ..., Nz - 1 and Nz >= 3.
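The grid division can be sketched as follows; the uniform elevation spacing and the clamping of the last node inside the image edge are assumptions consistent with the text, and all names are illustrative.

```python
def make_grid(w, h, block_size, hmin, hmax, nz=3):
    """Regular 3-D grid over a w x h virtual image.

    Planimetric nodes every block_size pixels (Nx = int(w/block_size) + 1,
    Ny = int(h/block_size) + 1) and nz >= 3 elevation planes at
    H_k = hmin + k*(hmax - hmin)/(nz - 1).
    """
    nx = int(w / block_size) + 1
    ny = int(h / block_size) + 1
    samples = [min(i * block_size, w - 1) for i in range(nx)]   # node columns
    vlines = [min(j * block_size, h - 1) for j in range(ny)]    # node rows
    heights = [hmin + k * (hmax - hmin) / (nz - 1) for k in range(nz)]
    return samples, vlines, heights
```

Each (sample_a, vline_b, H_k) triple then becomes one virtual grid control point once its ground intersection (B_abk, L_abk) is computed.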
Then, following sections (1) and (2), compute the object-space coordinates (B, L) at which the rays through the Nx * Ny virtual grid nodes (sample_a, vline_b) intersect the Nz elevation planes H. Here sample_a is the column number and vline_b the row number of the grid node in row a, column b. This yields a set of virtual grid control points (sample_a, vline_b, H_k, B_abk, L_abk), where a = 0, 1, ..., Nx - 1, b = 0, 1, ..., Ny - 1, k = 0, 1, ..., Nz - 1 (see Fig. 3).
In Fig. 3, St is the projection centre at time t, p(sample_a, vline_b) is the virtual-image coordinate, and (B_abk, L_abk, H_k) (k = 1, 2, ...) are the object-space coordinates obtained by intersecting the ray through image point p(sample_a, vline_b) with the elevation planes H_k (k = 1, 2, ...). In the figure, intersection with plane H_1 gives the object-space coordinate (B_ab1, L_ab1, H_1), and intersection with plane H_2 gives (B_ab2, L_ab2, H_2).
(2) Determine the concrete form of the approximate geometric model M from object space to virtual image space.
First determine the form of the approximate geometric model M from object space to virtual image space. Two forms are common: the 3-D polynomial model and the rational-polynomial model, the usual polynomial degree being 3. The formulas are as follows.
3-D polynomial:
sample'_i = a_0 + a_1 L'_i + a_2 B'_i + a_3 H'_i + ...;  vline'_i = b_0 + b_1 L'_i + b_2 B'_i + b_3 H'_i + ...    (5)
where a_0, a_1, a_2, ..., b_0, b_1, b_2, ... are coefficients.
Rational polynomial:
sample'_i = (a_0 + a_1 L'_i + a_2 B'_i + a_3 H'_i + ...) / (1 + c_1 L'_i + c_2 B'_i + c_3 H'_i + ...);
vline'_i = (b_0 + b_1 L'_i + b_2 B'_i + b_3 H'_i + ...) / (1 + d_1 L'_i + d_2 B'_i + d_3 H'_i + ...)    (6)
where (sample'_i, vline'_i) are normalized virtual-image coordinates, (L'_i, B'_i, H'_i) are normalized object-space coordinates, and a_1, a_2, a_3, ..., b_1, b_2, b_3, ..., c_1, c_2, c_3, ..., d_1, d_2, d_3, ... are coefficients.
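For illustration, the rational form (6) truncated to its first-order terms looks like this in Python; a full implementation would carry all degree-3 monomials, and the names are illustrative.

```python
def eval_rpc(num, den, L, B, H):
    """Rational polynomial (6), first-order terms only.

    num = (a0, a1, a2, a3); den = (c1, c2, c3), the leading 1 of the
    denominator being implied.  L, B, H are normalized object coordinates.
    """
    a0, a1, a2, a3 = num
    c1, c2, c3 = den
    return (a0 + a1 * L + a2 * B + a3 * H) / (1.0 + c1 * L + c2 * B + c3 * H)
```

With a zero denominator part the rational form reduces to the 3-D polynomial (5), which is why the two models share the numerator structure.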
(3) Solve the coefficients of the approximate geometric model M from object space to virtual image space.
Once the form of M is determined, its coefficients are solved using the virtual 3-D grid control-point coordinates obtained in step (1). The embodiment adopts the 3-D polynomial model.
First give an initial value of the coefficient vector X = [a_0 a_1 a_2 ... b_0 b_1 b_2 ...]^T (it may be all zeros), then iterate as follows (see Fig. 6):
1) By the least-squares principle, use the virtual 3-D grid control points to form the linearized error equation v = Ax - l, where x is the correction vector of the model coefficients, A is the coefficient matrix of x, l is the residual vector of the virtual-image coordinates, n is the number of virtual 3-D grid control points, and m is the number of unknowns. The embodiment sets up the matrix A (2n x m) and the vectors x (m x 1) and l (2n x 1) as follows:
A_{2n x m} =
[ 1  L'_1  B'_1  H'_1  ...  0  0     0     0     ...
  0  0     0     0     ...  1  L'_1  B'_1  H'_1  ...
  1  L'_2  B'_2  H'_2  ...  0  0     0     0     ...
  0  0     0     0     ...  1  L'_2  B'_2  H'_2  ...
  ...                                               ]

x_{m x 1} = [a_0 a_1 a_2 ... b_0 b_1 b_2 ...]^T

l_{2n x 1} = [sample'_1 - ŝ'_1  vline'_1 - v̂'_1  sample'_2 - ŝ'_2  vline'_2 - v̂'_2  ...]^T

where (ŝ'_i, v̂'_i) is the estimate of the image-space coordinate of the i-th virtual grid control point computed by formula (5) with the current coefficients.
2) Under the least-squares criterion (v^T v minimal), form and solve the normal equation of the error equation, obtaining the coefficient correction x = (A^T A)^(-1) (A^T l);
3) update the model coefficients: X' = X + x;
4) judge whether the iteration termination condition is met; the condition can be set by those skilled in the art, for example the correction x falling below a given threshold or the iteration count reaching a specified value. If met, stop and output the coefficients X'; otherwise return to 1) with the updated coefficients X' from 3) and repeat steps 1) to 4).
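Since model (5) is linear in its coefficients, the least-squares problem of steps 1) to 4) can be solved in one pass; the patent's iteration only becomes necessary for the rational model (6), whose denominator makes the problem nonlinear. The sketch below uses illustrative names; degree_terms is an assumed helper returning the monomial row of one control point, and fitting the sample and vline coefficient sets separately is equivalent to the block-structured joint system above.

```python
import numpy as np

def fit_model(ctrl_pts, degree_terms):
    """Least-squares fit of the 3-D polynomial model (5).

    ctrl_pts     : rows (sample', vline', L', B', H') of normalized virtual
                   grid control points.
    degree_terms : (L, B, H) -> list of monomials, e.g. [1, L, B, H, ...].
    """
    pts = np.asarray(ctrl_pts, dtype=float)
    A = np.array([degree_terms(L, B, H) for _, _, L, B, H in pts])
    xs = np.linalg.lstsq(A, pts[:, 0], rcond=None)[0]  # sample coefficients a
    xv = np.linalg.lstsq(A, pts[:, 1], rcond=None)[0]  # vline coefficients b
    return xs, xv
```

np.linalg.lstsq is numerically preferable to forming x = (A^T A)^(-1) A^T l explicitly, although both minimize the same residual v^T v.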
Through steps (1), (2) and (3), the model coefficients X of the geometric model M are determined and solved, realizing the back-calculation from object-space coordinates (B, L, H) to approximate virtual-image coordinates (sample, vline) (see Fig. 4).
As shown in Figs. 5 and 7, based on the design above, the embodiment's iterative back-calculation from object coordinate P0 to raw-image coordinate p comprises the following steps.
Let Pi denote the object-space coordinate at the start of the i-th iteration, P0 (B0, L0, H0) the externally supplied object-space ground truth, and p(sample_i, line_i) the raw-image coordinate output by the i-th iteration. The iteration runs as follows (see Fig. 5):
Step 1: according to the geometric model M, solve the virtual-image coordinate (sample_i, vline_i) from the object-space coordinate Pi (Bi, Li, H0) (i = 0, 1, 2, ...); on the first execution of step 1, i = 0. The specific implementation follows process (3).
In the embodiment, the initial virtual-image coordinate (sample_i, vline_i) is solved from the object coordinate Pi (Bi, Li, H0) (i = 0, 1, 2, ...) with formula (5); the geometric-model coefficients in formula (5) need to be solved only once, which can be done in advance, and the resulting geometric model M is then used directly during the iterative back-calculation.
Solving the coefficients of this concrete geometric model M comprises the following operations:
First divide the virtual image into a regular 3-D grid and compute the virtual 3-D grid control points; determine the form of the geometric model M from object space to virtual image space; give an initial value of the coefficient vector X = [a_0 a_1 a_2 ... b_0 b_1 b_2 ...]^T; and iterate as follows:
1) by the least-squares principle, use the virtual 3-D grid control points to form the linearized error equation v = Ax - l;
2) solve the normal equation by least squares, obtaining the coefficient correction x = (A^T A)^(-1) (A^T l);
3) update the model coefficients: X' = X + x;
4) judge whether the iteration condition is met; if so, stop and output the coefficients X'; otherwise return to 1) with the updated coefficients X' from 3) and iterate again.

Step 2: from the virtual row vline_i in the virtual-image coordinate obtained in step 1, compute the raw-image row line_i.
The specific implementation of step 2 follows process (1): first, from the relation between virtual row number and imaging time, obtain the imaging time t_vline = start_time + vline_i * dt of virtual row vline_i, start_time being the imaging time of the first image line; then, from the relation between raw-image row number and imaging time, obtain the raw-image row line_i corresponding to that imaging time.
Step 3: from the raw-image coordinate p(sample_i, line_i) and the object-space elevation H0 of the ground truth P0, compute the object-space coordinate P' (B', L'). The object-space elevation is always held at H0. In p(sample_i, line_i), sample_i stays constant while line_i takes the result of step 2 of the current iteration.
The specific implementation is an existing algorithm; see process (2).
Step 4: compare P' (B', L') computed in step 3 with the ground truth (B0, L0) of P0 and judge whether the iteration termination condition is met; the condition can be set by those skilled in the art, for example a threshold preset from experience or experiment. If the difference is less than or equal to the threshold, end the iteration and output the current raw-image coordinate p(sample_i, line_i); otherwise set i = i + 1 and compute the new object-space coordinate:
Pi = 2 * P0 - P',
i.e. Li = 2 * L0 - L' and Bi = 2 * B0 - B', then return to step 1 and continue the iteration.
In Fig. 5, (sample_1, vline_1) is solved from P0, then P' is obtained and P1 = 2 * P0 - P' is computed; next (sample_2, vline_2) is solved from P1, and so on until the condition is met. Through the above steps, high-precision image coordinates can be computed from object-space coordinates for a TDICCD linear-array push-broom sensor with line-frequency jumps. The method is simple and fast, requiring only minor additions on top of the collinearity-based forward calculation. Its characteristics are: accurate initial values when back-calculating raw-image coordinates from object-space coordinates, few iterations, and high accuracy of the output raw-image coordinates; it fully solves the coordinate back-calculation problem for TDICCD push-broom images whose line frequency jumps. Moreover, the invention uses only the concept of a virtual image in the coordinate computation (no actual virtual image needs to be generated), and this ingenious device is the essence of the method.
The specific embodiments described herein merely illustrate the spirit of the present invention. Those skilled in the art may make various modifications or additions to the described embodiments, or substitute them in similar ways, without departing from the spirit of the invention or exceeding the scope defined by the appended claims.

Claims (2)

1. A coordinate back-calculation method for a TDICCD linear array push-broom sensor, characterized in that raw image coordinate back-calculation is performed on the basis of a virtual image, the virtual image being an image whose line integration time is constant; within a fixed time span, the line integration time is inversely proportional to the number of image lines; the back-calculation is implemented as follows,
Let Pi denote the object coordinate at the start of the i-th iteration, where for i = 0, P0(B0, L0, H0) is the externally supplied true object coordinate, (B0, L0) being the geographic latitude and longitude and H0 the object-space elevation; p(sample_i, line_i) is the raw image coordinate output by the i-th iteration, sample_i being the raw image column number and line_i the raw image line number; initialize i = 0 and perform the following iteration,
Step 1: according to the geometric model M, back-calculate the virtual image coordinate (sample_i, vline_i) from the object coordinate Pi(Bi, Li, H0);
the geometric model M takes the form of a three-dimensional polynomial or a rational polynomial, and its coefficients are obtained by least-squares fitting to the virtual 3D grid control-point coordinates of the virtual image;
Step 2: compute the raw image line number line_i from the line number vline_i in the virtual image coordinate, implemented as follows,
the virtual image has the same numbers of rows and columns as the raw image, so the line integration time dt of the virtual image equals the total imaging time of the raw image divided by its total number of lines; first, from the relation between virtual line number and imaging time, obtain the imaging time of virtual line vline_i as t_vline = start_time + vline_i × dt, where start_time is the imaging time of the first image line; then, from the relation between raw image line number and imaging time, obtain the raw image line number line_i corresponding to that imaging time;
Step 3: compute the object coordinate P'(B', L') from the raw image coordinate p(sample_i, line_i) and the object-space elevation H0 of the true object coordinate P0;
Step 4: compare the object coordinate P'(B', L') computed in step 3 with (B0, L0) of the true object coordinate P0 and judge whether the iteration termination condition is satisfied; if so, output the raw image coordinate p(sample_i, line_i); otherwise, set i = i+1, compute the new object coordinate Pi = P0 × 2 − P', and return to step 1 to continue the iteration.
2. The coordinate back-calculation method for a TDICCD linear array push-broom sensor according to claim 1, characterized in that the coefficients of the geometric model M are obtained by least-squares fitting to the virtual 3D grid control-point coordinates of the virtual image, implemented as follows,
divide the virtual image into a regular 3D grid and compute the virtual 3D grid control points; determine the form of the geometric model M from object coordinates to virtual image coordinates, give an initial value of the model coefficients X, and iterate according to the following steps,
1) according to the least-squares iteration principle, use the virtual 3D grid control points to form the linearized error equations v = Ax − l, where x is the correction vector of the model coefficients, A is the coefficient matrix of x, and l is the residual vector of the virtual image coordinates;
2) solve the normal equations of the error equations by least squares, obtaining the coefficient corrections x = (A^T A)^(-1) (A^T l);
3) update the model coefficients: X' = X + x;
4) judge whether the iteration termination condition is satisfied; if so, terminate the iteration and output the model coefficients X'; otherwise, take the new model coefficients X' obtained in 3) and return to 1) to continue the iteration.
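The least-squares fitting loop of claim 2 can be made concrete with a small sketch. This is illustrative only and assumes numpy: `fit_model` is a hypothetical name, and a simple affine model stands in for the patent's 3D polynomial or rational polynomial, so that the normal-equation step x = (A^T A)^(-1)(A^T l) and the update X' = X + x are explicit.

```python
import numpy as np

def fit_model(B, L, obs, n_iter=3):
    """Estimate coefficients X of the stand-in model
    obs = X[0] + X[1]*B + X[2]*L by iterated least squares.

    For a linear model the first iteration already gives the exact
    least-squares solution; the loop mirrors steps 1)-4) of the claim."""
    # Design (coefficient) matrix A of the correction vector x
    A = np.column_stack([np.ones_like(B), B, L])
    X = np.zeros(3)                               # initial model coefficients
    for _ in range(n_iter):
        l = obs - A @ X                           # residual vector l
        x = np.linalg.solve(A.T @ A, A.T @ l)     # normal equations: x = (A^T A)^-1 A^T l
        X = X + x                                 # update: X' = X + x
    return X
```

In the patent's setting the observations `obs` would be virtual image coordinates at the 3D grid control points, with one such fit per image coordinate component.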
CN201310201971.8A 2013-05-27 2013-05-27 Coordinate back calculation method for TDICCD (time delay and integration charge coupled devices) linear array push-sweep sensor Expired - Fee Related CN103278140B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310201971.8A CN103278140B (en) 2013-05-27 2013-05-27 Coordinate back calculation method for TDICCD (time delay and integration charge coupled devices) linear array push-sweep sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310201971.8A CN103278140B (en) 2013-05-27 2013-05-27 Coordinate back calculation method for TDICCD (time delay and integration charge coupled devices) linear array push-sweep sensor

Publications (2)

Publication Number Publication Date
CN103278140A true CN103278140A (en) 2013-09-04
CN103278140B CN103278140B (en) 2015-07-15

Family

ID=49060729

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310201971.8A Expired - Fee Related CN103278140B (en) 2013-05-27 2013-05-27 Coordinate back calculation method for TDICCD (time delay and integration charge coupled devices) linear array push-sweep sensor

Country Status (1)

Country Link
CN (1) CN103278140B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103778610A (en) * 2014-01-24 2014-05-07 武汉大学 Geometric pretreatment method for vertical rail swing images of satellite-borne linear array sensor
CN106559665A (en) * 2016-10-20 2017-04-05 北京空间飞行器总体设计部 A kind of off-axis camera integration time determines method
CN106896390A (en) * 2017-02-23 2017-06-27 苏州建设监理有限公司 The coordinate measuring method of building compact district hidden point
CN111324857A (en) * 2020-03-19 2020-06-23 武汉大学 Quick inverse transformation calculation method based on TDICCD push-broom characteristic
CN111369453A (en) * 2020-02-26 2020-07-03 武汉大学 Image rapid geometric preprocessing method based on average elevation surface

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101827223A (en) * 2010-04-20 2010-09-08 武汉大学 Inner field stitching method of non-collinear TDI CCD imaging data based on line frequency normalization


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIAQI WANG et al., "Space optical remote sensor image motion velocity vector computational modeling, error budget and synthesis", Chinese Optics Letters *
GE Ping et al., "Research on adaptive MTF image restoration for high-resolution TDI-CCD imaging data", Remote Sensing for Land and Resources *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103778610A (en) * 2014-01-24 2014-05-07 武汉大学 Geometric pretreatment method for vertical rail swing images of satellite-borne linear array sensor
CN103778610B (en) * 2014-01-24 2016-09-14 武汉大学 A kind of spaceborne line array sensor hangs down the geometry preprocess method of rail sweeping image
CN106559665A (en) * 2016-10-20 2017-04-05 北京空间飞行器总体设计部 A kind of off-axis camera integration time determines method
CN106559665B (en) * 2016-10-20 2018-02-09 北京空间飞行器总体设计部 A kind of off-axis camera integration time determines method
CN106896390A (en) * 2017-02-23 2017-06-27 苏州建设监理有限公司 The coordinate measuring method of building compact district hidden point
CN111369453A (en) * 2020-02-26 2020-07-03 武汉大学 Image rapid geometric preprocessing method based on average elevation surface
CN111324857A (en) * 2020-03-19 2020-06-23 武汉大学 Quick inverse transformation calculation method based on TDICCD push-broom characteristic

Also Published As

Publication number Publication date
CN103278140B (en) 2015-07-15

Similar Documents

Publication Publication Date Title
CN106403902B (en) A kind of optical satellite in-orbit real-time geometry location method and system cooperateed with to star
CN103278140B (en) Coordinate back calculation method for TDICCD (time delay and integration charge coupled devices) linear array push-sweep sensor
US9654706B2 (en) Method for reducing image fuzzy degree of TDI-CCD camera
CN104897175B (en) Polyphaser optics, which is pushed away, sweeps the in-orbit geometric calibration method and system of satellite
CN104392435A (en) Fisheye camera calibration method and device
CN103697864B (en) A kind of narrow visual field double camera image splicing method based on large virtual camera
CN102901519B (en) optical push-broom satellite in-orbit stepwise geometric calibration method based on probe element direction angle
CN105528500A (en) Imaging simulation method and system for decimeter-scale satellite-borne TDI CCD stereoscopic mapping camera
CN102519433B (en) Method for inverting geometric calibrating parameter of satellite-borne linear array sensor by using RPC (Remote Position Control)
CN111508029A (en) Satellite-borne segmented linear array CCD optical camera overall geometric calibration method and system
CN103150724B (en) Segmented model-based camera calibration method
CN103673995A (en) Calibration method of on-orbit optical distortion parameters of linear array push-broom camera
CN102314674B (en) Registering method for data texture image of ground laser radar
CN102135435B (en) Error correction method and device for digital sun sensor
CN103914808A (en) Method for splicing ZY3 satellite three-line-scanner image and multispectral image
CN105513018A (en) Geometric correction method and apparatus for spaceborne whisk-broom imaging
CN111724465A (en) Satellite image adjustment method and device based on plane constraint optimal selection virtual control point
CN101598785B (en) Method for generating rational function imaging model of each grade satellite image products
CN107040695A (en) Spaceborne video image stabilization method and system based on RPC location models
CN112270698A (en) Non-rigid geometric registration method based on nearest curved surface
CN102322863B (en) Remote sensing satellite multi-satellite combined converse orbit and attitude determination method
CN104820984A (en) Satellite remote sensing stereo image processing system and method
CN103778610A (en) Geometric pretreatment method for vertical rail swing images of satellite-borne linear array sensor
CN110660099B (en) Rational function model fitting method for remote sensing image processing based on neural network
CN105004321A (en) Unmanned plane GPS-supported bundle djustment method in consideration of non-synchronous exposal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150715

Termination date: 20170527