CN109741381B - Satellite-borne push-broom optical sensor high-frequency error elimination method based on parallel observation - Google Patents

Satellite-borne push-broom optical sensor high-frequency error elimination method based on parallel observation

Info

Publication number: CN109741381B
Application number: CN201910064610.0A
Authority: CN (China)
Prior art keywords: error, attitude, frequency, point, CCD1
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN109741381A
Inventors: 孙向东, 张过, 蒋永华
Applicant and assignee: Individual
Priority date: 2019-01-23
Publication of CN109741381A: 2019-05-10
Publication of CN109741381B (grant): 2020-07-03

Abstract

The invention relates to a method for detecting high-frequency errors of a satellite-borne push-broom optical sensor based on parallel observation, and belongs to the field of aerospace. Taking the TDICCD, the mainstream imaging device of current high-resolution Earth-observation optical satellites, as the object of analysis, the method shows that, on the premise that elevation is obtained from global SRTM-DEM data, the homonymous-point intersection error in parallel observation is mainly caused by high-frequency errors and by interior-orientation-element errors among the imaging geometric parameters. The interior-orientation-element error is a stable systematic error that does not change, or does not change significantly, over a period of time and can be eliminated by on-orbit calibration. After high-precision on-orbit calibration accurately recovers the camera's interior orientation elements, a geometric positioning model is established, homonymous points are extracted from the parallel observations, and their intersection errors are computed with the aid of the SRTM-DEM. Spectral analysis of the intersection error then reveals the frequency, amplitude and other characteristics of the high-frequency error.

Description

Satellite-borne push-broom optical sensor high-frequency error elimination method based on parallel observation
Technical Field
The invention relates to a parallel observation-based satellite-borne push-broom optical sensor high-frequency error detection method, and belongs to the field of aerospace.
Background
Driven by the high-resolution Earth observation (Gaofen) major project launched in 2010, the Tianhui series, Ziyuan-3 and Gaofen series satellites have been launched in succession, and China has achieved a string of breakthroughs and notable results in the field of aerospace remote sensing. To this day, however, a large amount of data from China's Ziyuan- and remote-sensing-series satellites still cannot be put to good use because the positioning errors are not systematic, so the data accumulate and go to waste. Ensuring the application effect of domestic satellite data is therefore the key to achieving true autonomy of domestic remote sensing data.
At present, foreign research addresses only high-frequency platform jitter and recovers the high-frequency error from registration errors under a smoothness assumption or a sine/cosine waveform description of the jitter, which makes those methods hard to apply to eliminating the high-frequency errors of China's satellites currently in orbit. Existing domestic research does not fully account for the characteristics of current optical satellite platforms and lacks a complete identification of the high-frequency error sources, so it is difficult to apply in practice to domestic satellites to improve their geometric quality. Research on detecting and eliminating high-frequency errors is therefore of great significance for improving the geometric quality of optical satellites.
Disclosure of Invention
The invention aims to solve the problem that the existing high-frequency error detection method is low in precision, and provides a parallel observation-based satellite-borne push-broom optical sensor high-frequency error detection method.
The purpose of the invention is realized by the following technical scheme.
In the parallel-observation-based high-frequency error detection method for a satellite-borne push-broom optical sensor, the TDICCD, the mainstream imaging device of current high-resolution Earth-observation optical satellites, is taken as the object of analysis. On the premise that elevation is obtained from global SRTM-DEM data, the analysis shows that the homonymous-point intersection error in parallel observation is mainly caused by high-frequency errors and by interior-orientation-element errors among the imaging geometric parameters; the interior-orientation-element error is a stable systematic error that does not change, or does not change significantly, over a period of time and can be eliminated by on-orbit calibration. After high-precision on-orbit calibration accurately recovers the camera's interior orientation elements, a geometric positioning model is established, homonymous points are extracted from the parallel observations, and their intersection errors are computed with the aid of the SRTM-DEM. Spectral analysis of the intersection error then reveals the frequency, amplitude and other characteristics of the high-frequency error.
The satellite-borne push-broom optical sensor high-frequency error detection method based on parallel observation comprises the following steps:
step 1, registering characteristic points of a CCD overlapping area;
The overlapping pixels of adjacent CCD linear arrays are computed with the geometric positioning model, and the number of overlapping pixels is used as the maximum search window for homonymous points, so that homonymous points are obtained for as many image lines as possible and errors of higher frequency can be detected. During matching, pixel-level registration points are determined by a correlation-coefficient measure, and sub-pixel point locations are then obtained by least-squares matching. Given a point (x1, y1) on the CCD1 linear array, the homonymous point (at sub-pixel level) is obtained as follows:
step 1.1, based on a geometric positioning model, calculating the number overlap of overlapped pixels of a CCD1 linear array and a CCD2 linear array, and taking the number overlap of overlapped pixels as a maximum search window of correlation coefficient registration;
Step 1.2: based on the geometric positioning model, compute the ground-object coordinates (X, Y, Z) corresponding to (x1, y1), and then compute the corresponding predicted pixel coordinates (x̃2, ỹ2) on the CCD2 linear array;
Step 1.3: with (x̃2, ỹ2) as the center and the overlapping pixel count overlap as the maximum search window, compute point by point, according to formula (1), the correlation coefficient with the neighborhood centered on (x1, y1), and take the point (x'2, y'2) with the maximum correlation coefficient as the pixel-level registration point;
$$\rho(c,r)=\frac{\sum_{i=1}^{h}\sum_{j=1}^{w}\bigl(g(x_1+j,\,y_1+i)-\bar g\bigr)\bigl(g'(c+j,\,r+i)-\bar g'\bigr)}{\sqrt{\sum_{i=1}^{h}\sum_{j=1}^{w}\bigl(g(x_1+j,\,y_1+i)-\bar g\bigr)^{2}\,\sum_{i=1}^{h}\sum_{j=1}^{w}\bigl(g'(c+j,\,r+i)-\bar g'\bigr)^{2}}}\qquad(1)$$

In the formula: i and j are the row and column indices of a pixel within the correlation window; (c, r) is the coordinate of the center of the search area (the candidate point on the CCD2 linear array currently being tested); w and h are the width and height of the correlation-coefficient calculation window; g and g' are the image gray values of the CCD1 and CCD2 linear arrays, and ḡ and ḡ' are their mean gray values over the window; g(x1 + j, y1 + i) and g'(c + j, r + i) are the gray values at the corresponding points of the two windows.
Step 1.4: taking the point locations (x1, y1) and (x'2, y'2) as initial values, perform least-squares matching according to formula (2) to obtain the sub-pixel registration point (x2, y2), completing the registration.

[Formula (2): the least-squares image matching model, which estimates the radiometric and geometric transformation between the two image windows]
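As an illustration of step 1, the following Python sketch (assuming NumPy and standard [row, column] image indexing) searches the CCD2 image around the predicted point for the location whose gray-value window best correlates with the window centered on (x1, y1) in the CCD1 image; the function names and window half-size are invented for the example, and the least-squares sub-pixel refinement of step 1.4 is omitted.

```python
import numpy as np

def correlation_coefficient(a, b):
    """Normalized correlation coefficient of two equally sized gray-value windows."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def pixel_level_match(img1, img2, x1, y1, x2_pred, y2_pred, overlap, half=7):
    """Search an 'overlap'-pixel window around the predicted point (x2_pred, y2_pred)
    on CCD2 for the location best correlated with the window around (x1, y1) on CCD1."""
    template = img1[y1 - half:y1 + half + 1, x1 - half:x1 + half + 1]
    best_rho, best_xy = -1.0, (x2_pred, y2_pred)
    for dy in range(-overlap, overlap + 1):
        for dx in range(-overlap, overlap + 1):
            cx, cy = x2_pred + dx, y2_pred + dy
            window = img2[cy - half:cy + half + 1, cx - half:cx + half + 1]
            if window.shape != template.shape:
                continue  # search window ran off the image edge
            rho = correlation_coefficient(template, window)
            if rho > best_rho:
                best_rho, best_xy = rho, (cx, cy)
    return best_xy, best_rho  # pixel-level registration point and its correlation
```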
step 2, performing gross error detection on homonymous points, and removing mismatching point pairs;
Because of radiometric differences between the linear-array CCDs and the lack of texture in some areas, mismatches are almost inevitable and would severely affect high-frequency error detection and elimination. The mismatched point pairs therefore need to be removed.
The relative position relation of the homonymous point pairs is plotted in a coordinate system whose abscissa is time (image line) and whose ordinate is in pixels. Median filtering is applied to the original relative relation of the homonymous points, and the original sequence is compared with the filtered result; if the difference at a point exceeds a preset threshold, that point is judged to be a mismatch and is rejected.
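A minimal sketch of this gross-error rejection, assuming the offsets of the homonymous point pairs have already been ordered along the time (image-line) axis; the SciPy median filter, window length and threshold are illustrative choices rather than values prescribed by the invention.

```python
import numpy as np
from scipy.signal import medfilt

def reject_mismatches(dx, dy, window=51, threshold=1.5):
    """dx, dy: relative offsets (x2 - x1, y2 - y1) of the homonymous point pairs,
    ordered by imaging time (image line). Points whose offset deviates from the
    median-filtered curve by more than 'threshold' pixels are flagged as mismatches."""
    dx, dy = np.asarray(dx, float), np.asarray(dy, float)
    keep = (np.abs(dx - medfilt(dx, window)) <= threshold) & \
           (np.abs(dy - medfilt(dy, window)) <= threshold)
    return keep  # boolean mask of points to retain
```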
Step 3, calculating a high-frequency error;
After the camera's interior-orientation-element error has been eliminated by on-orbit geometric calibration, the intersection error of homonymous points in parallel observation is the concrete manifestation of the high-frequency error. Let a pair of parallel-observation homonymous points (xccd1, yccd1) and (xccd2, yccd2) be imaged at times t0 and t1 respectively; the homonymous-point intersection error is then expressed as:
[Formula (3): the geometric positioning model written for the two parallel observations, including the positioning-error term (ΔX ΔY ΔZ)ᵀ]

In the formula, (ΔX ΔY ΔZ)ᵀ is the positioning error; (ΔX ΔY ΔZ)ᵀ at t0 is the positioning error of time t0 and (ΔX ΔY ΔZ)ᵀ at t1 is the positioning error of time t1. Obviously, if the on-board errors differ between times t0 and t1, the two positioning errors are unequal, which produces the homonymous-point intersection error. Analysis of this formula shows that the high-frequency error of the satellite is a high-frequency attitude error; based on the principle of compensating attitude errors with a bias matrix, an attitude compensation matrix is introduced into formula (3):
[Formula (4): the geometric positioning model of formula (3) with the attitude compensation matrix R_t introduced; (ΔX ΔY ΔZ)ᵀ is the positioning error]

In the formula, R_t is the attitude compensation matrix corresponding to time t, defined from the three attitude compensation angles to be solved:

R_t = R(φ_t) · R(ω_t) · R(κ_t)   (5)

where φ_t, ω_t and κ_t are the attitude compensation angles to be solved and R(φ_t), R(ω_t) and R(κ_t) are the corresponding elemental rotation matrices.
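As an illustration, the sketch below composes an attitude compensation matrix from three small angles; the axis assignment and rotation order are assumptions made for the example, since the text only states that R_t is built from the three compensation angles φ_t, ω_t, κ_t.

```python
import numpy as np

def compensation_matrix(phi, omega, kappa):
    """Attitude compensation matrix composed of three small angles (radians).
    Assumed order: R = Rz(kappa) @ Ry(omega) @ Rx(phi)."""
    cx, sx = np.cos(phi), np.sin(phi)
    cy, sy = np.cos(omega), np.sin(omega)
    cz, sz = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx
```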
A relative attitude concept is introduced, and the influence of high-frequency attitude errors on the geometric positioning model is eliminated by solving the relative attitude.
Let A1 represent the true attitude during satellite imaging and A0 represent the attitude of the satellite up-and-down. If at the previous time t0By solving for the subsequent time t only, with reference to the positioning model of (2)1Relative to t0The attitude compensation matrix of (2) eliminates the intersection error of the homonymous points, and the actually solved and recovered attitude is A2; by contrast, the process of recovering the posture A2 can recover the relative relationship between the postures at different times. When geometric processing is performed by using the A0 attitude, the attitude error is a random error; when geometric processing is performed by using the a2 pose, the pose error is a systematic error, and the a2 has equivalence to the true pose a1 with respect to the relative positioning accuracy of the image. Therefore, by recovering a2, not only the solution can be simplified, but also the high frequency attitude error can be solved.
When the time factor is not considered: for a homonymous point pair (xccd1, yccd1) and (xccd2, yccd2), the ground-object coordinates (X, Y, Z) corresponding to the reference observation are computed from the geometric positioning model and the global SRTM-DEM, which yields a control point for the other observation. When the number of control points on an image line is greater than or equal to 2, the constant-bias-matrix method can be used to solve the attitude compensation matrix R_offset corresponding to time t1; when the number of control points on the line is less than 2, no solution can be obtained.
When the time factor is considered: taking the similarity of the homonymous-point intersection errors as the classification measure, all homonymous points are grouped and the attitude compensation matrix is solved group by group, which reduces the influence of mismatches. Suppose N homonymous point pairs (xccd1, yccd1, xccd2, yccd2)_i, i ≤ N, with yccd1 < yccd2, are obtained on CCD1 and CCD2. All homonymous points are grouped by similarity and the attitude compensation matrix is solved group by group; the specific solving flow is as follows (a code sketch of the grouping is given after the list):
(1) Sort all homonymous point pairs in ascending order of yccd1;
(2) Based on the geometric positioning model and the SRTM-DEM data, compute the ground-object coordinates (X, Y, Z)_i corresponding to (xccd2, yccd2)_i;
(3) Based on the geometric positioning model, obtain the image-point coordinates (x'ccd1, y'ccd1)_i on CCD1 corresponding to (X, Y, Z)_i;
(4) Calculating the homonymous point intersection error of the CCD 1:
(Δx,Δy)i=(x'ccd1-xccd1,y'ccd1-yccd1)i,i≤N (6)
substituting the result of the formula (6) into the formula (3) to obtain the intersection error of the homonymous points of the CCD 1;
(5) Compare the intersection errors of adjacent homonymous points according to formula (7); if the difference is within the threshold d, the two point pairs belong to the same group, otherwise a new group is created;

|(Δx, Δy)_{i+1} − (Δx, Δy)_i| ≤ d   (7)
(6) if the number of members in a certain group is less than a preset value, deleting the group;
(7) When a group contains m members (xccd1, yccd1, xccd2, yccd2)_i, i ≤ m, solve the group's attitude compensation matrix R_offset by the method of step 3, with the scope (action domain) of the attitude compensation matrix defined according to formula (8); traverse all groups to obtain each group's attitude compensation matrix and the action domain corresponding to it;
min({yccd1}j,j≤m)=ymin≤y≤ymax=max({yccd1}j,j≤m) (8)
(8) For the attitude at time t, traverse all group attitude compensation matrices and action domains obtained in step (7), and update the attitude data according to formula (9) using the compensation matrix whose action domain contains time t;

[Formula (9): the attitude at time t is updated by applying the group's attitude compensation matrix R_offset to the original attitude data]
If no action domain contains time t, the updated attitude compensation matrix is obtained by linear interpolation between the compensation angles of the nearest preceding and following groups.
The updated attitude compensation matrix is substituted into the formula (4) to eliminate the high-frequency error.
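The grouping logic of steps (1) to (6) can be sketched as follows; the per-group constant-bias-matrix solution of step (7) depends on the satellite's rigorous geometric positioning model and is therefore not shown, and the threshold and minimum group size are illustrative values only.

```python
import numpy as np

def group_by_intersection_error(points, errors, d=0.3, min_members=5):
    """points: (x_ccd1, y_ccd1, x_ccd2, y_ccd2) tuples sorted by y_ccd1;
    errors: matching (dx, dy) intersection errors from formula (6).
    Adjacent points whose errors differ by no more than d pixels share a group."""
    groups, current = [], [0]
    for i in range(1, len(points)):
        diff = np.hypot(errors[i][0] - errors[i - 1][0],
                        errors[i][1] - errors[i - 1][1])
        if diff <= d:
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    return [g for g in groups if len(g) >= min_members]

def action_domain(points, group):
    """Formula (8): the group's compensation matrix acts on lines y_min..y_max."""
    ys = [points[i][1] for i in group]
    return min(ys), max(ys)
```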
Step 4, evaluating the precision of the high-frequency error.
High-frequency attitude correction is applied to the original image to obtain a corrected image; road-straightness verification and relative registration-accuracy verification are then carried out on both the original and the corrected image, and the verification results are compared. The evaluation methods are:
Method a) a road-straightness evaluation method, used to evaluate across-track attitude jitter;
Method b) a relative registration-error evaluation method, used to evaluate the elimination of along-track attitude jitter.
Evaluation method a):
Because a linear-array push-broom sensor images each line instantaneously, across-track high-frequency attitude jitter distorts linear ground objects that run in the along-track direction. A linear ground object along the track direction is selected as the evaluation target, and the sub-pixel coordinates of its edge are extracted with a Sobel operator. Finally, to check the correction effect on across-track high-frequency attitude jitter, a straight line is fitted to the edge coordinates of the linear ground object, and the straightness of the road edges before and after correction is evaluated through the fitting residuals.
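A sketch of evaluation method a): extract a sub-pixel road edge with a Sobel gradient and measure straightness as the RMS of straight-line fitting residuals. The use of SciPy's Sobel filter and the parabolic sub-pixel interpolation are implementation assumptions, not details fixed by the invention.

```python
import numpy as np
from scipy.ndimage import sobel

def road_straightness(image):
    """Fit a straight line to the strongest across-track edge of each image line
    and return the RMS of the fitting residuals (pixels) plus the residuals."""
    grad = np.abs(sobel(image.astype(float), axis=1))  # across-track gradient
    rows = np.arange(image.shape[0])
    cols = grad.argmax(axis=1).astype(float)           # coarse edge column per line
    # parabolic interpolation around the maximum for a sub-pixel edge coordinate
    for r in rows:
        c = int(cols[r])
        if 0 < c < image.shape[1] - 1:
            g0, g1, g2 = grad[r, c - 1], grad[r, c], grad[r, c + 1]
            denom = g0 - 2 * g1 + g2
            if denom != 0:
                cols[r] += 0.5 * (g0 - g2) / denom
    slope, intercept = np.polyfit(rows, cols, 1)
    residuals = cols - (slope * rows + intercept)
    return float(np.sqrt(np.mean(residuals ** 2))), residuals
```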
Evaluation method b):
Since the DEM data selected in this example are derived from the 30 m SRTM ground elevation data, the elevation accuracy is not high. To avoid the effect of elevation accuracy, the intersection angle between the two scenes being matched should be as small as possible. When the intersection angle is no more than 2°, the influence of elevation error on the along-track intersection is negligible. When the intersection angle is greater than 2°, high-precision matching with the COSI-Corr software and computation of the residuals show a fairly obvious digital surface model (DSM) signature in the across-track direction, so the elevation error cannot be ignored there; in that case the registration-error evaluation is used only to assess the elimination of the high-frequency error in the along-track direction, as follows:
the basic flow of verification of the above evaluation results is as follows:
1) Select original images A and B, each with no fewer than 1000 lines, substitute the high-frequency-corrected attitude parameters into the geometric model, and produce the images to obtain the image data.
2) Match the original images A and B with a high-precision matching algorithm to generate, for each pixel (xak, yak) (k = 1, 2, …, n) of image A, the corresponding homonymous point (xbk, ybk) in image B.
3) Intersect the geometric model of image A with the SRTM-DEM to compute the ground-object coordinates (Latk, Lonk, Hk) of each pixel (xak, yak). Then, using the geometric model of image B, project (Latk, Lonk, Hk) back into image B to obtain the predicted pixel coordinates (xmk, ymk); the positioning residual of this pixel is (xbk − xmk, ybk − ymk). Using the pixel coordinate position of each point (xak, yak), draw the along-track relative residual map after high-frequency correction.
4) Apply the same processing of steps 1) to 3) to the attitude before high-frequency correction, and draw the along-track relative residual map before correction.
5) Compare the two residual maps from steps 3) and 4) to evaluate the elimination of the high-frequency error.
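A compact sketch of this registration-residual evaluation; geolocate_pixel and project_to_image are assumed interfaces standing in for the forward intersection of image A's model with the SRTM-DEM and the back-projection through image B's model, both of which depend on the sensor's rigorous positioning model.

```python
import numpy as np

def along_track_residuals(matches, geolocate_pixel, project_to_image):
    """matches: list of ((xa, ya), (xb, yb)) homonymous pixel pairs between images A and B.
    geolocate_pixel(xa, ya) -> (lat, lon, h): intersect image A's model with the SRTM-DEM.
    project_to_image(lat, lon, h) -> (xm, ym): back-project through image B's model.
    Returns the along-track component (yb - ym) of the registration residual per point."""
    residuals = []
    for (xa, ya), (xb, yb) in matches:
        lat, lon, h = geolocate_pixel(xa, ya)
        xm, ym = project_to_image(lat, lon, h)
        residuals.append(yb - ym)
    return np.asarray(residuals)

# Usage idea: compute the residuals once with the corrected attitude and once with the
# original attitude, then compare the two along-track residual curves.
```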
Advantageous effects:
1. The invention adopts a virtual-camera synchronous imaging method to eliminate image distortion caused by high-frequency errors and similar effects; compared with other prior art it removes the distortion more effectively and can yield a distortion-free image.
2. The invention improves the internal accuracy of the image; the orientation accuracy with sparse control reaches about 1 pixel, which is comparable to the accuracy obtained with full control.
3. Mismatched points are removed by a median-filtering method, the equivalence between the relative attitude and the true attitude is analyzed, and the high-frequency attitude error is recovered from the homonymous-point intersection error by solving the relative attitude; compared with prior-art error-elimination methods, this is more accurate and more efficient.
Drawings
FIG. 1 is a schematic diagram of mismatch culling based on motion smoothing;
FIG. 2 is a schematic diagram of relative attitude;
FIG. 3 is a flow chart of evaluation of high frequency attitude correction effects;
FIG. 4 is a schematic diagram of road edge extraction wherein: (a) original image, (b) road edge feature, (c) a complete road edge;
FIG. 5 is a plot of the linear fit residuals of the Lijiang scene road edge coordinates; wherein: (a) is a schematic diagram before correction, (b) is a schematic diagram after correction;
FIG. 6 is a plot of Lijiang scene road edge coordinate linear fitting residual spectrum analysis;
FIG. 7 is a schematic diagram of the registration error of camera CCD3; (a) is the along-track registration error diagram and (b) the across-track registration error diagram;
FIG. 8 is a comparison of homonymous-point intersection errors for the ZY-1 02C (Ziyuan-1 02C) satellite; (a) the Henan scene; (b) the Taihang Mountain scene; (c) the Inner Mongolia scene; (d) the Taiyuan scene.
Detailed Description
For a better understanding of the objects and advantages of the present invention, reference should be made to the following detailed description taken in conjunction with the accompanying drawings and examples.
Example 1:
the following data were collected:
From analysis of the ZY-1 02C platform and its downlinked data, the factors causing high-frequency errors on the satellite include: 1) attitude quantization error: the on-board storage unit of the attitude is 0.0055°, so only attitude values that are integer multiples of 0.0055° can be stored accurately; gentle attitude changes between 0 and 0.0055° therefore cannot be recorded, the downlinked attitude shows step changes, and the internal accuracy of the image is degraded; 2) platform jitter: the attitude output frequency is 0.25 Hz, and the platform stability can only be maintained at 0.001°/s.
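To illustrate factor 1), the snippet below quantizes a slowly varying attitude angle to the 0.0055° storage unit, reproducing the step-like behaviour described above; the simulated attitude profile itself is invented purely for the demonstration.

```python
import numpy as np

STEP = 0.0055  # on-board attitude storage unit, degrees

t = np.linspace(0.0, 40.0, 401)              # seconds
true_attitude = 0.008 * np.sin(0.1 * t)      # gentle attitude variation (degrees)
stored_attitude = np.round(true_attitude / STEP) * STEP  # what can actually be stored

# The stored curve jumps in 0.0055 deg steps and loses the sub-step variation.
print(np.unique(stored_attitude))
```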
To carry out high-frequency error elimination and verification experiments on ZY-1 02C imagery, ZY-1 02C HR images of the Henan, Taihang Mountain, Inner Mongolia and Taiyuan areas were collected. The specific information of the image data is as follows:
[Table: specific parameters of the collected ZY-1 02C HR images of the Henan, Taihang Mountain, Inner Mongolia and Taiyuan areas]
The satellite-borne push-broom optical sensor high-frequency error detection method based on parallel observation disclosed by the invention comprises the following steps:
step 1, registering the homonymous points of a CCD overlapping area;
step 2, gross error detection of homologous points;
step 3, calculating a high-frequency error;
and 4, evaluating the precision of the high-frequency error.
1. The step 1 comprises the following steps:
Considering that the number of overlapping pixels between adjacent CCD linear arrays is small, the geometric positioning model can be used to compute the overlapping pixels of adjacent CCD linear arrays, with the overlap size taken as the maximum search window for homonymous points, so that homonymous points are obtained for as many image lines as possible and errors of higher frequency can be detected. The matching process determines pixel-level registration points from a correlation-coefficient measure and then obtains sub-pixel point locations by least-squares matching. Given a point (x1, y1) on the CCD1 linear array, the homonymous point is obtained as follows:
(1) based on a geometric positioning model, calculating the number of overlapped pixels overlap of the CCD1 linear array and the CCD2 linear array, and taking the number of overlapped pixels overlap as a maximum search window of correlation coefficient registration;
(2) Based on the geometric positioning model, compute the ground-object coordinates (X, Y, Z) corresponding to (x1, y1), and then compute the corresponding predicted pixel coordinates (x̃2, ỹ2) on the CCD2 linear array;
(3) With (x̃2, ỹ2) as the center and overlap as the maximum search window, compute point by point, according to formula (1), the correlation coefficient with the neighborhood centered on (x1, y1), and take the point (x'2, y'2) with the maximum correlation coefficient as the pixel-level registration point;
In formula (1), given above, g and g' are the image gray values of the CCD1 and CCD2 linear arrays, and w and h are the width and height of the correlation-coefficient calculation window;
(4) Taking the point locations (x1, y1) and (x'2, y'2) as initial values, perform least-squares matching according to formula (2) to obtain the sub-pixel registration point (x2, y2).
The Henan, Taihang Mountain, Inner Mongolia and Taiyuan areas contain 38, 28, 13 and 45 high-precision control points respectively. The Henan and Taiyuan control points were obtained manually from 1:2000 and 1:5000 orthoimages and a digital elevation model; the Inner Mongolia and Taihang Mountain control points are high-precision field GPS points, with object-space coordinate accuracy better than 0.1 m and manually selected image-space coordinates accurate to better than 1.5 pixels.
2. The step 2 comprises the following steps:
Because of radiometric differences between the linear-array CCDs and the lack of texture in some areas (such as water bodies), mismatches are almost inevitable and would severely affect high-frequency error detection and elimination. The mismatched point pairs therefore need to be removed.
FIG. 1 illustrates the mismatch-rejection method: the blue line is the relative position relation (Δx, Δy) = (x2 − x1, y2 − y1) of the homonymous point pairs (x1, y1, x2, y2), and the red line is the result of median-filtering the blue line. If the difference between the original relative relation of a homonymous point and the filtered result exceeds a certain threshold, the point is judged to be a mismatch.
With the rejection threshold for the parallel-observation homonymous points set to 1.5 pixels and the grouping threshold set to 0.3 pixels, 53610, 56361, 79942 and 202428 parallel-observation homonymous point pairs were finally obtained on the Henan, Taihang Mountain, Inner Mongolia and Taiyuan scene images respectively.
3. The step 3 comprises the following steps:
After the camera's interior-orientation-element error has been eliminated by on-orbit geometric calibration, the intersection error of homonymous points in parallel observation is the concrete manifestation of the high-frequency error. Suppose a pair of parallel-observation homonymous points (xccd1, yccd1) and (xccd2, yccd2) are imaged at times t0 and t1 respectively; the intersection error can then be expressed as formula (3).
In the formula, (ΔX ΔY ΔZ)ᵀ is the positioning error. Obviously, if the on-board errors differ between times t0 and t1, the positioning errors are unequal, which produces the intersection error. The high-frequency error of the domestic satellite can be regarded simply as a high-frequency attitude error, and based on the principle of compensating attitude errors with a bias matrix, an attitude compensation matrix can be introduced into formula (3):
[Formula (4): the geometric positioning model of formula (3) with the attitude compensation matrix introduced]

In the formula, R_t is the attitude compensation matrix corresponding to time t; with reference to the bias matrix, it can be defined as

R_t = R(φ_t) · R(ω_t) · R(κ_t)   (5)

where φ_t, ω_t and κ_t are the attitude compensation angles to be solved.
The invention introduces a relative attitude concept, and eliminates the influence of high-frequency errors on the geometric positioning model by solving the relative attitude.
Assume A1 represents the true attitude during satellite imaging and A0 the attitude downlinked from the satellite. If the positioning model at the earlier time t0 is taken as the reference and only the attitude compensation matrix of the later time t1 relative to t0 is solved to eliminate the intersection error, the attitude actually solved and recovered is A2 in fig. 2. As can be seen, this process recovers the relative relationship between the attitudes at different times. When geometric processing uses the A0 attitude, the attitude error is a random error; when it uses the A2 attitude, the attitude error is a systematic error, and A2 is equivalent to the true attitude A1 as far as the relative positioning accuracy of the image is concerned. Therefore, recovering A2 not only simplifies the solution but also solves the high-frequency attitude error.
According to the above analysis, for a homonymous point pair (xccd1, yccd1) and (xccd2, yccd2), suppose yccd1 < yccd2. Based on the geometric positioning model and the global SRTM-DEM, the corresponding ground coordinates (X, Y, Z) are computed to obtain control points. When the number of control points on an image line is greater than or equal to 2, the constant-bias-matrix method can be used to solve the attitude compensation matrix corresponding to time t1.
The high-frequency error of domestic satellite platforms has many contributing factors, and the high-frequency attitude error is difficult to describe with a rigorous model; lacking model constraints, the solution of the high-frequency attitude error is susceptible to mismatches. Therefore, the similarity of the homonymous-point intersection errors is used as the classification measure, all homonymous points are grouped, and the attitude compensation matrix is solved group by group to reduce the influence of mismatches.
Assume that N homonymous point pairs (xccd1, yccd1, xccd2, yccd2)_i, i ≤ N, with yccd1 < yccd2, are obtained on CCD1 and CCD2. The specific solving process is as follows:
(1) Sort all homonymous point pairs in ascending order of yccd1;
(2) Based on the geometric positioning model and the SRTM-DEM data, compute the ground coordinates (X, Y, Z)_i corresponding to (xccd2, yccd2)_i;
(3) Based on the geometric positioning model, obtain the image-point coordinates (x'ccd1, y'ccd1)_i on CCD1 corresponding to (X, Y, Z)_i;
(4) Calculate the intersection error of the homonymous points:
(Δx, Δy)_i = (x'ccd1 − xccd1, y'ccd1 − yccd1)_i, i ≤ N   (6)
(5) Compare the intersection errors of adjacent homonymous points according to formula (7); if the difference is within the threshold d, the two point pairs belong to the same group, otherwise a new group is created;
|(Δx, Δy)_{i+1} − (Δx, Δy)_i| ≤ d   (7)
(6) If the number of members in a group is too small, e.g. fewer than 5, the group is deleted;
(7) Assuming a group contains m members (xccd1, yccd1, xccd2, yccd2)_i, i ≤ m, solve the group's attitude compensation matrix R_offset by the method above, with the scope of the matrix defined by formula (8); traverse and solve the attitude compensation matrices of all groups;
min({yccd1}_j, j ≤ m) = ymin ≤ y ≤ ymax = max({yccd1}_j, j ≤ m)   (8)
(8) For the attitude at time t, traverse all groups and update the attitude data according to formula (9) using the compensation matrix whose action domain contains time t; if no action domain contains time t, obtain the compensation angles by linear interpolation between the nearest preceding and following groups.

[Formula (9): the attitude at time t is updated by applying the group's attitude compensation matrix to the original attitude data]
Calculating and comparing the homonymous point intersection errors before and after the high-frequency error elimination according to the formula (8), and obtaining the following results:
[Table: homonymous-point intersection errors of the Henan, Taihang Mountain, Inner Mongolia and Taiyuan scenes before and after high-frequency error elimination]
As can be seen from the homonymous-point intersection errors before elimination, the ZY-1 02C geometric positioning model is affected by the high-frequency error and its relative positioning accuracy is poor; because the high-frequency error behaves randomly, the homonymous-point intersection error ranges from a minimum of about 1 pixel (Taiyuan scene) to a maximum of about 13 pixels (Henan scene). After the high-frequency error is eliminated, the homonymous-point intersection error is about 0.3 pixel, and the relative positioning accuracy is markedly improved.
Fig. 8 compares the homonymous-point intersection errors: the black line is the intersection error before the high-frequency error is eliminated and the gray line the error after elimination. From the black lines it can be seen that the high-frequency error of ZY-1 02C is mainly caused by insufficient attitude quantization precision (0.0055°) and by platform jitter; step changes caused by the quantization error are visible in the across-track errors of figs. 8(a) and 8(b), while fig. 8(b) also shows the along-track error and fig. 8(d) shows the effect of platform jitter. The high-frequency error of ZY-1 02C is complex and a rigorous descriptive model is hard to establish, but its influence can be well eliminated with the method presented here.
4. Step 4 comprises the following substeps:
In theory, matching the experimental image against a high-precision reference image could verify the jitter-elimination effect before and after correction, but in practice it is difficult to match effective homonymous points accurately. Therefore, two accuracy-evaluation methods are adopted to verify the high-frequency correction effect: a) road-straightness evaluation and b) relative registration-error evaluation, which assess the elimination of across-track and along-track attitude jitter respectively. The evaluation flow chart is shown in fig. 3.
Evaluation method a):
Because a linear-array push-broom sensor images each line instantaneously, across-track high-frequency attitude jitter distorts linear ground objects running in the along-track direction. In the experiment, a straight ground object along the track direction, such as a straight road, is therefore selected as the evaluation target, and the sub-pixel coordinates of the road edge are extracted with a Sobel operator. Finally, to check the correction effect on across-track high-frequency attitude jitter, a straight line is fitted to the road-edge coordinates, and the straightness of the road edges before and after correction is evaluated through the fitting residuals.
Take a north-south road in Lijiang (FIG. 4a) as an example. The road-edge features extracted with the Sobel operator are shown in fig. 4b; a complete edge is then selected, and the edge coordinates are expressed as sub-pixel values.
Figs. 5a and 5b show the linear-fit residuals of the road-edge sub-pixel coordinates before and after high-frequency correction for the Lijiang road. Before correction the extracted road edge fluctuates noticeably: the root-mean-square error of the linear fit is 0.36 pixel and the fluctuation amplitude (the maximum pixel offset caused by jitter) is about 0.9 pixel. After correction the fluctuation amplitude is reduced to 0.4 pixel and the root-mean-square error to 0.17 pixel. From the line numbers on the abscissa it can also be seen that one fluctuation period spans about 106 image lines; since the integration time of each line at imaging is 0.093 ms, the fluctuation of the residuals in fig. 5 gives a jitter frequency of 1/(w × t) = 1/(106 × 0.000093 s) ≈ 101.4 Hz, which is essentially consistent with the foregoing linear-accelerometer measurements. Spectral analysis (fig. 6) shows that the peak near 100 Hz is greatly reduced after the high-frequency attitude-jitter correction.
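The frequency estimate quoted above can be reproduced, and the 100 Hz peak of fig. 6 located, with a short computation; the FFT-based spectrum shown here is a generic sketch (the residual series itself is not published in the text), with the 106-line period and 0.093 ms line integration time taken from the example.

```python
import numpy as np

line_time = 0.093e-3          # integration time per image line, seconds
period_lines = 106            # fluctuation period observed in the fit residuals
print(1.0 / (period_lines * line_time))   # ~101.4 Hz, the estimated jitter frequency

def residual_spectrum(residuals, line_time=0.093e-3):
    """Amplitude spectrum of a per-line residual series, for locating jitter peaks."""
    residuals = np.asarray(residuals, float) - np.mean(residuals)
    amp = np.abs(np.fft.rfft(residuals)) / len(residuals)
    freq = np.fft.rfftfreq(len(residuals), d=line_time)
    return freq, amp
```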
Evaluation method b):
Since the DEM data selected in this example are derived from the 30 m SRTM ground elevation data, the elevation accuracy is not high. To avoid the effect of elevation accuracy, the intersection angle between the two scenes being matched should be as small as possible. The pitch-angle differences between any two of the three Lijiang scenes are very small (less than 0.1°), so the influence of elevation error on the along-track intersection can be ignored and the matched strips are relatively clean. The roll angles between the scenes, however, differ considerably (more than 2°); after high-precision matching with the COSI-Corr software and computation of the residuals, an obvious digital surface model (DSM) signature appears in the across-track direction. Elevation errors are therefore not negligible there, so the pairwise registration errors are used here only to evaluate the along-track correction effect.
The basic flow of the verification is as follows:
1) Select 1000 lines each from the original images A and B, substitute the high-frequency-corrected attitude parameters into the geometric model, and produce the images to obtain the image data.
2) Match the two images with a high-precision matching algorithm to generate, for each pixel (xak, yak) (k = 1, 2, …, n) of image A, the corresponding homonymous point (xbk, ybk) in image B.
3) Intersect the geometric model of image A with the DEM to compute the geographic coordinates (Latk, Lonk, Hk) of each pixel (xak, yak). Then, using the geometric model of image B, project these coordinates back into image B to obtain the pixel coordinates (xmk, ymk); the positioning residual of the pixel is (xbk − xmk, ybk − ymk). Draw the along-track relative residual map after high-frequency correction according to the coordinate positions.
4) Perform the same processing for the attitude before high-frequency correction, and draw the along-track relative residual map before correction according to the coordinate positions.
It can be seen that the orientation residuals before the high-frequency error is eliminated are large and random, and difficult to compensate with a conventional processing model; after elimination, the orientation residuals are small, and the orientation accuracy is mainly governed by the accuracy of the control points. With this method the influence of the high-frequency error is removed without additional control data, the relative positioning accuracy is greatly improved, and the correctness of the method is verified.
The above detailed description is intended to illustrate the objects, aspects and advantages of the present invention, and it should be understood that the above detailed description is only exemplary of the present invention and is not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (3)

1. The satellite-borne push-broom optical sensor high-frequency error detection method based on parallel observation is characterized by comprising the following steps: the method comprises the following steps:
step 1, registering characteristic points of a CCD overlapping area;
calculating overlapping pixels of adjacent CCD linear arrays by means of the geometric positioning model, taking the number of overlapping pixels as the maximum window for searching homonymous points, and ensuring as far as possible that homonymous points are obtained for every image line so that errors of higher frequency can be detected; in the matching process, pixel-level registration points are determined from a correlation-coefficient measure, and sub-pixel point locations are then obtained by least-squares matching; given a point (x1, y1) on the CCD1 linear array, the method for obtaining its homonymous point, namely the sub-pixel-level point, is as follows:
step 1.1, based on a geometric positioning model, calculating the number overlap of overlapped pixels of a CCD1 linear array and a CCD2 linear array, and taking the number overlap of overlapped pixels as a maximum search window of correlation coefficient registration;
step 1.2, based on the geometric positioning model, computing the ground-object coordinates (X, Y, Z) corresponding to (x1, y1), and computing the corresponding predicted pixel coordinates (x̃2, ỹ2) on the CCD2 linear array;
step 1.3, with (x̃2, ỹ2) as the center and the number of overlapping pixels overlap as the maximum search window, calculating point by point, according to formula (1), the correlation coefficient ρ with the neighborhood centered on (x1, y1), and taking the point (x'2, y'2) with the maximum correlation coefficient as the pixel-level registration point;
$$\rho(c,r)=\frac{\sum_{i=1}^{h}\sum_{j=1}^{w}\bigl(g(x_1+j,\,y_1+i)-\bar g\bigr)\bigl(g'(c+j,\,r+i)-\bar g'\bigr)}{\sqrt{\sum_{i=1}^{h}\sum_{j=1}^{w}\bigl(g(x_1+j,\,y_1+i)-\bar g\bigr)^{2}\,\sum_{i=1}^{h}\sum_{j=1}^{w}\bigl(g'(c+j,\,r+i)-\bar g'\bigr)^{2}}}\qquad(1)$$

in the formula: i and j are the row and column indices of a pixel within the correlation window; w and h are the width and height of the correlation-coefficient calculation window; g and g' are the image gray values of the CCD1 and CCD2 linear arrays; ḡ and ḡ' are their mean gray values over the window; g(x1 + j, y1 + i) and g'(c + j, r + i) are the gray values at the corresponding points of the two windows, (c, r) being the candidate point currently tested;
step 1.4, taking the point locations (x1, y1) and (x'2, y'2) as initial values, performing least-squares matching according to formula (2) to obtain the sub-pixel registration point (x2, y2), completing the registration;

[Formula (2): the least-squares image matching model]
step 2, performing gross error detection on homonymous points, and removing mismatching point pairs;
because of radiometric differences between the linear-array CCDs and the lack of texture in some areas, mismatches are almost inevitable and would severely affect high-frequency error detection and elimination; the mismatched point pairs therefore need to be eliminated;
the relative position relation of the homonymous point pairs is plotted in a coordinate system whose abscissa is time (image line) and whose ordinate is in pixels; median filtering is applied to the original relative relation of the homonymous points, and the original relation is compared with the filtered result; if the difference exceeds a preset threshold, the point is judged to be a mismatch and is rejected;
step 3, calculating a high-frequency error;
after the camera's interior-orientation-element error has been eliminated by on-orbit geometric calibration, the intersection error of homonymous points in parallel observation is the concrete manifestation of the high-frequency error; let a pair of parallel-observation homonymous points (xccd1, yccd1) and (xccd2, yccd2) be imaged at times t0 and t1 respectively; the homonymous-point intersection error is then expressed as:
[Formula (3): the geometric positioning model written for the two parallel observations, including the positioning-error terms]

in the formula, (ΔX ΔY ΔZ)ᵀ at t0 is the positioning error of time t0 and (ΔX ΔY ΔZ)ᵀ at t1 is the positioning error of time t1; obviously, if the on-board errors differ between times t0 and t1, the two positioning errors are unequal, which produces the homonymous-point intersection error; analysis of this formula shows that the high-frequency error of the satellite is a high-frequency attitude error, and based on the principle of compensating attitude errors with a bias matrix, an attitude compensation matrix is introduced into formula (3):
[Formula (4): the geometric positioning model of formula (3) with the attitude compensation matrix R_t introduced]

in the formula, (X_S Y_S Z_S)ᵀ is the position vector of the GPS phase center in the WGS84 coordinate system, R_u is the bias matrix, m is the scale factor, R_{J2000}^{WGS84} is the transformation matrix of the J2000 coordinate system relative to the WGS84 coordinate system, R_{body}^{J2000} is the transformation matrix of the body coordinate system relative to the J2000 coordinate system, R_{camera}^{body} is the transformation matrix of the camera coordinate system relative to the body coordinate system, and (ΔX ΔY ΔZ)ᵀ is the positioning error; the attitude compensation matrix R_t corresponding to time t is defined as:

R_t = R(φ_t) · R(ω_t) · R(κ_t)   (5)

where φ_t, ω_t and κ_t are the attitude compensation angles to be solved and R(φ_t), R(ω_t), R(κ_t) are the corresponding elemental rotation matrices;
a relative attitude concept is introduced, and the influence of high-frequency attitude errors on a geometric positioning model is eliminated by solving the relative attitude;
let A1 represent the true attitude during satellite imaging and A0 the attitude downlinked from the satellite; if the positioning model at the earlier time t0 is taken as the reference and only the attitude compensation matrix of the later time t1 relative to t0 is solved to eliminate the homonymous-point intersection error, the attitude actually solved and recovered is A2; it can be seen that the process of recovering attitude A2 recovers the relative relationship between the attitudes at different moments; when geometric processing uses the A0 attitude, the attitude error is a random error; when it uses the A2 attitude, the attitude error is a systematic error, and A2 is equivalent to the true attitude A1 as far as the relative positioning accuracy of the image is concerned; therefore, recovering A2 not only simplifies the solution but also allows the high-frequency attitude error to be solved;
when the time factor is not considered: for a homonymous point pair (xccd1, yccd1) and (xccd2, yccd2), the ground-object coordinates corresponding to the reference observation are computed from the geometric positioning model and the global SRTM-DEM to obtain control points; when the number of control points on an image line is greater than or equal to 2, the constant-bias-matrix method can be used to solve the attitude compensation matrix R_offset corresponding to time t1; when the number of control points on the line is less than 2, no solution can be obtained;
when the time factor is considered: taking the similarity of the homonymous-point intersection errors as the classification measure, all homonymous points are grouped and the attitude compensation matrix is solved group by group, which reduces the influence of mismatches; N homonymous point pairs (xccd1, yccd1, xccd2, yccd2)_i, i ≤ N, with yccd1 < yccd2, are obtained on CCD1 and CCD2; all homonymous points are grouped by similarity and the attitude compensation matrix is solved group by group; the specific solving flow is as follows:
(1) sorting all homonymous point pairs in ascending order of yccd1;
(2) based on the geometric positioning model and the SRTM-DEM data, computing the ground-object coordinates (X, Y, Z)_i corresponding to (xccd2, yccd2)_i;
(3) based on the geometric positioning model, obtaining the image-point coordinates (x'ccd1, y'ccd1)_i on CCD1 corresponding to (X, Y, Z)_i;
(4) Calculating the homonymous point intersection error of the CCD 1:
(Δx,Δy)i=(x'ccd1-xccd1,y'ccd1-yccd1)i,i≤N (6)
substituting the result of the formula (6) into the formula (3) to obtain the intersection error of the homonymous points of the CCD 1;
(5) comparing the intersection errors of adjacent homonymous points according to formula (7); if the difference is within the threshold d, the two point pairs belong to the same group, otherwise a new group is created;

|(Δx, Δy)_{i+1} − (Δx, Δy)_i| ≤ d   (7)
(6) if the number of members in a certain group is less than a preset value, deleting the group;
(7) when a group contains m members (xccd1, yccd1, xccd2, yccd2)_i, i ≤ m, solving the group's attitude compensation matrix R_offset by the method of step 3, the scope of the attitude compensation matrix being defined according to formula (8); traversing and solving the attitude compensation matrices of all groups to obtain each group's attitude compensation matrix and its corresponding action domain;
min({yccd1}j,j≤m)=ymin≤y≤ymax=max({yccd1}j,j≤m) (8)
(8) for the attitude at time t, traversing all group attitude compensation matrices and action domains obtained in step (7), and updating the attitude data according to formula (9) using the compensation matrix whose action domain contains time t;

[Formula (9): the attitude at time t is updated by applying the group's attitude compensation matrix to the original attitude data]
if no action domain contains time t, the updated attitude compensation matrix is obtained by linear interpolation between the compensation angles of the nearest preceding and following groups;
the updated attitude compensation matrix is substituted into the formula (4) to eliminate the high-frequency error.
2. The parallel-observation-based high-frequency error detection method for a satellite-borne push-broom optical sensor according to claim 1, characterized in that: in step 4, high-frequency attitude correction is applied to the original image to obtain a corrected image; road-straightness verification and relative registration-accuracy verification are carried out on the original and corrected images respectively, and the verification results are compared; the evaluation method is a road-straightness evaluation method for evaluating across-track attitude jitter:
because a linear-array push-broom sensor images each line instantaneously, across-track high-frequency attitude jitter distorts linear ground objects running in the along-track direction; a linear ground object along the track direction is selected as the evaluation target, and the sub-pixel coordinates of its edge are extracted with a Sobel operator; finally, to check the correction effect on across-track high-frequency attitude jitter, a straight line is fitted to the edge coordinates of the linear ground object, and the straightness of the road edges before and after correction is evaluated through the fitting residuals.
3. The parallel-observation-based high-frequency error detection method for a satellite-borne push-broom optical sensor according to claim 1, characterized in that: in step 4, high-frequency attitude correction is applied to the original image to obtain a corrected image; road-straightness verification and relative registration-accuracy verification are carried out on the original and corrected images respectively, and the verification results are compared; the evaluation method is a relative registration-error evaluation method for evaluating the elimination of along-track attitude jitter:
when the selected DEM data are derived from the 30 m SRTM-DEM ground elevation data, the elevation accuracy is not high; to avoid the effect of elevation accuracy, the intersection angle between the two scenes being matched should be as small as possible; when the intersection angle is no more than 2°, the influence of elevation error on the along-track intersection is negligible; when the intersection angle is greater than 2°, high-precision matching with the COSI-Corr software and computation of the residuals show a fairly obvious digital surface model (DSM) signature in the across-track direction, so only the elimination of the high-frequency error in the along-track direction is evaluated, as follows:
the basic flow of verification of the above evaluation results is as follows:
1) selecting original images A and B, each with no fewer than 1000 lines, substituting the high-frequency-corrected attitude parameters into the geometric model, and producing the images to obtain the image data;
2) matching the original images A and B with a high-precision matching algorithm to generate, for each pixel (xak, yak) (k = 1, 2, …, n) of image A, the corresponding homonymous point (xbk, ybk) in image B;
3) intersecting the geometric model of image A with the SRTM-DEM to compute the ground-object coordinates (Latk, Lonk, Hk) of each pixel (xak, yak); then, using the geometric model of image B, projecting (Latk, Lonk, Hk) back into image B to obtain the pixel coordinates (xmk, ymk), so that the positioning residual of the pixel is (xbk − xmk, ybk − ymk); drawing the along-track relative residual map after high-frequency correction according to the pixel coordinate position of each point (xak, yak);
4) performing the same processing of steps 1) to 3) on the attitude before high-frequency correction, and drawing the along-track relative residual map before high-frequency correction;
5) comparing the two residual maps from steps 3) and 4) to evaluate the elimination of the high-frequency error.
CN201910064610.0A 2019-01-23 2019-01-23 Satellite-borne push-broom optical sensor high-frequency error elimination method based on parallel observation Active CN109741381B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910064610.0A CN109741381B (en) 2019-01-23 2019-01-23 Satellite-borne push-broom optical sensor high-frequency error elimination method based on parallel observation

Publications (2)

Publication Number Publication Date
CN109741381A CN109741381A (en) 2019-05-10
CN109741381B true CN109741381B (en) 2020-07-03

Family

ID=66365802

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910064610.0A Active CN109741381B (en) 2019-01-23 2019-01-23 Satellite-borne push-broom optical sensor high-frequency error elimination method based on parallel observation

Country Status (1)

Country Link
CN (1) CN109741381B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10214328A (en) * 1997-01-30 1998-08-11 Hitachi Software Eng Co Ltd Picture information analyzing device and storage medium with picture information analyzing program
CN1290928A (en) * 1999-09-14 2001-04-11 Lg电子株式会社 Method and equipment for slide optic sensor
CN1426021A (en) * 2002-12-19 2003-06-25 上海交通大学 Non-linear registration method for remote sensing image
CN1987896A (en) * 2005-12-23 2007-06-27 中国科学院中国遥感卫星地面站 High resolution SAR image registration processing method and system
CN102213762A (en) * 2011-04-12 2011-10-12 中交第二公路勘察设计研究院有限公司 Method for automatically matching multisource space-borne SAR (Synthetic Aperture Radar) images based on RFM (Rational Function Model)
CN103886569A (en) * 2014-04-03 2014-06-25 北京航空航天大学 Parallel and matching precision constrained splicing method for consecutive frames of multi-feature-point unmanned aerial vehicle reconnaissance images
CN103983343A (en) * 2014-05-29 2014-08-13 武汉大学 Satellite platform chattering detection method and system based on multispectral image
CN104931022A (en) * 2015-04-21 2015-09-23 国家测绘地理信息局卫星测绘应用中心 Satellite image three-dimensional area network adjustment method based on satellite-borne laser height measurement data
CN105486312A (en) * 2016-01-30 2016-04-13 武汉大学 Star sensor and high-frequency angular displacement sensor integrated attitude determination method and system
CN106959454A (en) * 2017-03-20 2017-07-18 上海航天控制技术研究所 A kind of flutter inversion method based on numeric field TDI and continuous multiple line battle array imaging pattern
CN107564057A (en) * 2017-08-08 2018-01-09 武汉大学 Take the in-orbit geometric calibration method of high rail level battle array optical satellite of Atmosphere Refraction correction into account
CN108762324A (en) * 2018-05-23 2018-11-06 深圳市道通智能航空技术有限公司 Horizontal stage electric machine angle and angular speed evaluation method, device, holder and aircraft

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101442619B (en) * 2008-12-25 2010-08-18 武汉大学 Method for splicing non-control point image
CA2703314A1 (en) * 2009-05-06 2010-11-06 University Of New Brunswick Method of interest point matching for images
CN101907459B (en) * 2010-07-12 2012-01-04 清华大学 Monocular video based real-time posture estimation and distance measurement method for three-dimensional rigid body object

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
"Digital Satellite Image Mapping of Antarctica";Jorn Sievers;《Polarforschung》;19891231;全文 *
"Geometric Calibration and Accuracy Assessment of ZiYuan-3 Multispectral Images";Yonghua Jiang;《IEEE Transactions on Geoscience and Remote Sensing》;20140731;全文 *
"推扫式光学卫星影像系统几何校正产品的3维几何模型及定向算法研究";张过;《测绘学报》;20100228;全文 *
"推扫式光学卫星遥感影像产品三维几何模型研究及应用";张过;《遥感信息》;20110228;全文 *
"线推扫式高光谱相机侧扫成像几何校正";王书民;《红外与激光工程》;20140228;全文 *
"线阵推扫式影像近似几何校正算法的精度比较";朱述龙;《遥感学报》;20040531;全文 *
《Improvement and Assessment of the Geometric Accuracy of Chinese High-Resolution Optical Satellites》;Yonghua Jiang;《IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing》;20151231;全文 *

Also Published As

Publication number Publication date
CN109741381A (en) 2019-05-10

Similar Documents

Publication Publication Date Title
Li et al. Rigorous photogrammetric processing of HiRISE stereo imagery for Mars topographic mapping
CN103115614B (en) Associated parallel matching method for multi-source multi-track long-strip satellite remote sensing images
Toutin et al. DEM generation with ASTER stereo data
US20050147324A1 (en) Refinements to the Rational Polynomial Coefficient camera model
Conte et al. Structure from Motion for aerial thermal imagery at city scale: Pre-processing, camera calibration, accuracy assessment
CN112597428B (en) Flutter detection correction method based on beam adjustment and image resampling of RFM model
CN112419380B (en) Cloud mask-based high-precision registration method for stationary orbit satellite sequence images
Gong et al. A detailed study about digital surface model generation using high resolution satellite stereo imagery
CN109671109B (en) Dense point cloud generation method and system
Zhang et al. SAR mapping technology and its application in difficulty terrain area
Shi et al. Fusion of a panoramic camera and 2D laser scanner data for constrained bundle adjustment in GPS-denied environments
Toutin et al. GCP requirement for high resolution satellite mapping
Li et al. Photogrammetric processing of Tianwen-1 HiRIC imagery for precision topographic mapping on Mars
CN111161186B (en) Push-broom type remote sensor channel registration method and device
CN109741381B (en) Satellite-borne push-broom optical sensor high-frequency error elimination method based on parallel observation
CN110503604B (en) Aviation area array image real-time orthotropic splicing method based on high-precision POS
CN109886988B (en) Method, system, device and medium for measuring positioning error of microwave imager
CN114092534B (en) Hyperspectral image and laser radar data registration method and registration system
Zhang et al. Automatic processing of Chinese GF-1 wide field of View images
CN111222544B (en) Ground simulation test system for influence of satellite flutter on camera imaging
Wang et al. A method for generating true digital orthophoto map of UAV platform push-broom hyperspectral scanners assisted by lidar
Huang et al. Image network generation of uncalibrated UAV images with low-cost GPS data
Oh et al. Extraction of digital elevation model using stereo matching with slope-adaptive patch transformation
JP2004171413A (en) Digital image processor
CN110580679B (en) Mixed stitching method applied to large-area plane image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant