CN110211054B - Method for manufacturing distortion-free image of satellite-borne push-broom optical sensor - Google Patents

Method for manufacturing distortion-free image of satellite-borne push-broom optical sensor

Info

Publication number
CN110211054B
CN110211054B (application CN201910350812.1A)
Authority
CN
China
Prior art keywords
virtual
camera
coordinate system
real
image
Prior art date
Legal status: Active
Application number
CN201910350812.1A
Other languages
Chinese (zh)
Other versions
CN110211054A (en)
Inventor
孙向东
张过
蒋永华
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN201910350812.1A
Publication of CN110211054A
Application granted
Publication of CN110211054B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Graphics (AREA)
  • Automation & Control Theory (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a method for producing an undistorted image from a satellite-borne push-broom optical sensor, and belongs to the technical field of satellite image production. The method constructs a virtual-camera geometric positioning model by solving the orbit and attitude state of the satellite, and finally generates an undistorted image by a virtual re-imaging method. Since the definition of the real body coordinate system cannot be known exactly, a body coordinate system is defined according to a fixed rule, and the attitude and interior orientation elements are then solved in the same manner for every scene, which keeps the final interior orientation elements stable and makes the joint solution of attitude and interior orientation feasible. The virtual-camera geometric positioning model is then built from the orbit and attitude state, and undistorted image production is realized by virtual resampling. By establishing a ground control point library and assisting point measurement in a semi-automatic manner, control points are acquired rapidly, saving time and improving efficiency.

Description

Method for manufacturing distortion-free image of satellite-borne push-broom optical sensor
Technical Field
The invention relates to a method for manufacturing an undistorted image of a satellite-borne push-broom optical sensor, and belongs to the technical field of satellite image manufacturing.
Background
Even after a satellite image has undergone on-orbit geometric calibration and high-frequency error elimination, residual effects of lens distortion and high-frequency errors remain in the image; this internal distortion severely hinders subsequent satellite image processing and degrades the quality of satellite products.
In the prior art, F. de Lussy et al. proposed fitting satellite high-frequency jitter with a superposition of several sine/cosine functions, decomposing and solving waveform functions of different frequencies from registration errors computed from parallel observations; based on a similar principle, the parallel observations of the SPOT5 panchromatic and multispectral CCD arrays were used to eliminate attitude jitter, yielding a DTM of good accuracy despite a small base-to-height ratio. Likewise, S. Mattson, A. Boyd et al. eliminated HiRISE attitude jitter using parallel observations, finally obtaining an almost undistorted image and improving DEM accuracy from worse than 5 m to better than 0.5 m. These methods, however, address only the high-frequency jitter of the platform.
disclosure of Invention
The technical problem to be solved by the invention is to eliminate the complex deformations in the image caused by high-frequency errors, camera distortion and the like, and to realize undistorted image production for satellite-borne push-broom optical sensors; to this end, a method for producing undistorted images of a satellite-borne push-broom optical sensor is provided.
The purpose of the invention is realized by the following technical scheme:
a method for manufacturing a distortion-free image of a satellite-borne push-broom optical sensor comprises the steps of solving the orbit and attitude state of a satellite to complete the construction of a virtual camera geometric positioning model, and finally realizing the generation of the distortion-free image through a virtual re-imaging method. After the body coordinate system is defined, the attitude and the internal orientation elements are distinguished in the same way for each scene image, and the final internal orientation element is ensured to be stable, so that the calculation of the attitude and the internal orientation element can be realized. And then, completing the construction of a virtual camera geometric positioning model according to the track and the posture state, and finally realizing distortion-free image production by utilizing virtual resampling.
A method for manufacturing an undistorted image of a satellite-borne push-broom optical sensor comprises the following steps:
Step 1: recovering the position and attitude of the ray at the time of imaging by solving the satellite orbit, i.e., positioning and orienting the ray.
The geodetic coordinates of two intersection points are solved through the inverse form of the rational function model, the two intersection points being the intersections of the imaging ray with the highest and lowest elevation surfaces, as shown below:
Lat = P5(x, y, H) / P6(x, y, H)
Lon = P7(x, y, H) / P8(x, y, H)    (1)
where P5, P6, P7, P8 are the polynomials from which the geodetic coordinates are solved; (x, y) are the image coordinates of the intersection point, H is the elevation, Lat is the latitude of the intersection point in the geodetic coordinate system, and Lon is its longitude.
The two ground intersection points in the geodetic coordinate system can then be converted into the geocentric rectangular coordinate system. Once the object coordinates of the two intersection points are determined, the direction of the photographic ray in the geocentric rectangular coordinate system is the difference of their positions.
In the first method, the photographic ray OA intersects the highest and lowest elevation planes at two points whose geodetic coordinates are [Lat1, Lon1, H1] and [Lat2, Lon2, H2], respectively. The two intersection points share the same image coordinates (x, y). Both [Lat1, Lon1] and [Lat2, Lon2] are obtained by inverse calculation with formula (1). Correspondingly, the coordinates of the two intersection points are converted from the geodetic coordinate system to the geocentric rectangular coordinate system, giving [X1, Y1, Z1] and [X2, Y2, Z2].
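As a concrete illustration of this first method, the following sketch (Python with NumPy; the polynomial evaluators P5..P8 are assumed to come from the image's RPC file, and the function names are illustrative, not from the patent) intersects a ray with the two elevation planes via the inverse rational function model and differences the geocentric positions to get the ray direction:

```python
import numpy as np

# WGS84 ellipsoid constants
A = 6378137.0            # semi-major axis (m)
E2 = 6.69437999014e-3    # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Geodetic [Lat, Lon, H] -> geocentric rectangular (ECEF) [X, Y, Z]."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    n = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)  # prime vertical radius
    return np.array([(n + h) * np.cos(lat) * np.cos(lon),
                     (n + h) * np.cos(lat) * np.sin(lon),
                     (n * (1.0 - E2) + h) * np.sin(lat)])

def ray_direction(x, y, h_max, h_min, P5, P6, P7, P8):
    """Direction of the photographic ray in ECEF via formula (1):
    intersect the ray of image point (x, y) with the highest and lowest
    elevation planes, convert both points to ECEF, and difference them."""
    pts = []
    for h in (h_max, h_min):
        lat = P5(x, y, h) / P6(x, y, h)   # inverse RFM, formula (1)
        lon = P7(x, y, h) / P8(x, y, h)
        pts.append(geodetic_to_ecef(lat, lon, h))
    return pts[1] - pts[0], pts           # direction and the two ECEF points
```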
In the second method, the geocentric rectangular coordinates can also be obtained from the rigorous model, as shown below:
[X, Y, Z]^T = [X(t), Y(t), Z(t)]^T + m · R(t) · [tan(Ψx), tan(Ψy), 1]^T    (2)
where m is a scale factor; Ψx, Ψy are the pointing angles obtained by decomposing the imaging-ray direction of the detector element into along-track and across-track components; R(t) is the rotation matrix from the body coordinate system to the geocentric rectangular coordinate system; and [X(t), Y(t), Z(t)] are the coordinates of the projection center in the geocentric rectangular coordinate system.
Because the intersection points with the highest and lowest elevation planes share the same image point, the two intersections have the same [X(t), Y(t), Z(t)], R(t) and [tan(Ψx), tan(Ψy)], but differ in object coordinates [Xi, Yi, Zi] and scale factor m. The rigorous imaging model of the two intersections is therefore expressed as:
[Xi, Yi, Zi]^T = [X(t), Y(t), Z(t)]^T + mi · R(t) · [tan(Ψx), tan(Ψy), 1]^T,  i = 1, 2    (3)
where [Xi, Yi, Zi] are the geocentric rectangular coordinates of intersection point i. Substituting the two intersection points into formula (3) and taking the quotient gives:
(X1 - X(t)) / (X2 - X(t)) = (Y1 - Y(t)) / (Y2 - Y(t)) = (Z1 - Z(t)) / (Z2 - Z(t)) = k    (4)
where k = m1 / m2.
Rewriting formula (4) yields:

X1 - X(t) = k · (X2 - X(t))
Y1 - Y(t) = k · (Y2 - Y(t))
Z1 - Z(t) = k · (Z2 - Z(t))    (5)
Rewriting formula (5) and eliminating the unknown k yields:

(X1 - X(t)) · (Y2 - Y(t)) = (X2 - X(t)) · (Y1 - Y(t))
(X1 - X(t)) · (Z2 - Z(t)) = (X2 - X(t)) · (Z1 - Z(t))    (6)
Expanding formula (6) gives the linear form:

(Y2 - Y1) · X(t) + (X1 - X2) · Y(t) = X1·Y2 - X2·Y1
(Z2 - Z1) · X(t) + (X1 - X2) · Z(t) = X1·Z2 - X2·Z1    (7)
There are three unknowns in equation (7), but one ray yields only two independent equations; therefore, to solve for the position [X(t), Y(t), Z(t)], the same procedure is repeated for a second ray and the resulting equations (7) are solved jointly, as sketched below.
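A minimal sketch of this linear solution, assuming the ECEF intersection points of each ray with the highest and lowest elevation planes are already available from the inverse-RFM step; each ray contributes the two equations of formula (7), and the stacked system is solved by least squares without initial values or iteration:

```python
import numpy as np

def ray_equations(p1, p2):
    """The two linear equations of formula (7) contributed by one ray, given
    its ECEF intersections p1, p2 with the highest/lowest elevation planes."""
    X1, Y1, Z1 = p1
    X2, Y2, Z2 = p2
    A = np.array([[Y2 - Y1, X1 - X2, 0.0],
                  [Z2 - Z1, 0.0, X1 - X2]])
    b = np.array([X1 * Y2 - X2 * Y1,
                  X1 * Z2 - X2 * Z1])
    return A, b

def solve_projection_center(ray1, ray2):
    """Stack the equations of two rays (a 4x3 system) and solve for
    [X(t), Y(t), Z(t)] by linear least squares; no initial value is needed."""
    A1, b1 = ray_equations(*ray1)
    A2, b2 = ray_equations(*ray2)
    A = np.vstack([A1, A2])
    b = np.concatenate([b1, b2])
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```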
Step 2: attitude calculation, i.e., solving the rotation matrix from the body coordinate system to the geocentric rectangular coordinate system.
The concept of an equivalent body coordinate system is used to resolve the correlation between the attitude and the interior orientation elements.

The equivalent body coordinate system is defined as follows: the X axis points along the flight direction; the Z axis points toward the ground and is the resultant of the unit vectors of the two photographic rays. The direction of each photographic ray is obtained from step 1 as [X2 - X1, Y2 - Y1, Z2 - Z1], and the direction of the Z axis in the geocentric rectangular coordinate system is:
Z = (uOA + uOB) / |uOA + uOB|    (8)

where uOA and uOB are the unit direction vectors of the two photographic rays OA and OB.
the X-axis of the equivalent body coordinate system is perpendicular to the plane OAB. The orientation of the X axis in a rectangular coordinate system of the earth's center is as follows:
X = (uOA × uOB) / |uOA × uOB|    (9)

The Y axis of the equivalent body coordinate system is determined from the X and Z axes by the right-hand rule:
Y = Z × X    (10)

After determining the directions of the three axes of the equivalent body coordinate system in the geocentric coordinate system, R(t) is constructed as:

R(t) = [X  Y  Z]    (11)

where the columns of R(t) are the direction vectors of the three body axes expressed in the geocentric rectangular coordinate system.
the rotation matrix is converted into a quaternion form or an Euler angle form, and can be used for describing the attitude state of the satellite and realizing attitude calculation.
Step 3: solving the interior orientation elements.
The direction of the photographic ray is converted from the geocentric rectangular coordinate system to the equivalent body coordinate system using the direction obtained in step 1 and the attitude obtained in step 2. The solution proceeds ray by ray.
The pointing of a sensor detector element in the body coordinate system is expressed as follows:
[X, Y, Z]body^T = R(t)^T · [X2 - X1, Y2 - Y1, Z2 - Z1]^T    (12)
where [X, Y, Z]body is the direction of the photographic ray in the body coordinate system. According to the definition of the detector pointing angle, the interior orientation elements of the current detector element are expressed independently as:
tan(Ψx) = Xbody / Zbody
tan(Ψy) = Ybody / Zbody    (13)
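Steps 2 and 3 combine into a short computation; a sketch, assuming R_t is the body-to-geocentric rotation of formula (11) and ray_dir_ecef the ray direction from step 1:

```python
import numpy as np

def pointing_angles(R_t, ray_dir_ecef):
    """Interior orientation of one detector element as pointing angles,
    formulas (12)-(13). R_t: body-to-geocentric rotation from step 2;
    ray_dir_ecef: photographic-ray direction from step 1."""
    x_b, y_b, z_b = R_t.T @ ray_dir_ecef   # formula (12): rotate into body frame
    psi_x = np.arctan(x_b / z_b)           # formula (13)
    psi_y = np.arctan(y_b / z_b)
    return psi_x, psi_y
```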
and 4, constructing a square element model in the virtual camera.
A virtual camera is defined whose platform shoots the same area synchronously with the real satellite at almost the same orbit position and attitude; the camera is free of lens distortion, its linear array is an ideal straight line, it push-brooms stably and exposes with a constant integration time, so the image acquired by the virtual camera is an undistorted image. By establishing the imaging geometric relationship between the virtual camera and the real camera and resampling the image acquired by the real camera, the complex deformation in the real image can be eliminated.
Because the virtual camera platform shoots synchronously with the real satellite at almost the same orbit and attitude, and its stable operation is unaffected by attitude jitter, a polynomial is fitted to the discrete orbit and attitude data solved for the real satellite, and the fitted polynomial serves as the attitude and orbit model of the virtual camera platform.
As for the line scan time, the virtual camera exposes the ground with a constant integration time. If the virtual camera and the real camera start imaging at the same time t0, the imaging time t of virtual image line l is:

t = t0 + τ·l    (14)

where τ is the virtual camera integration time, taken as the average integration time of the real camera.
Images acquired by multiple CCD linear arrays are discontinuous: adjacent CCDs overlap by some pixels and are staggered along the track, so stitching is needed to remove the overlapping pixels and obtain a continuous image. Four common positional relationships between the virtual CCD array and the real CCD arrays are used to realize, respectively, a panchromatic single camera, a field-stitched panchromatic dual camera, multispectral band registration, and single-satellite dual-camera stitching. The virtual camera contains only a single, ideally linear CCD, so the image it acquires is continuous. To keep the resolution of the virtual image similar to that of the real image, the virtual and real cameras share the same principal distance. Meanwhile, to reduce the pointing difference between real and virtual imaging rays, the camera coordinates (xc, yc) of each detector element of the real camera are used as observations to fit the line equation xc = a·yc + b of the virtual CCD in the camera coordinate system; this serves as the interior orientation model of the virtual camera.
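A brief sketch of the least-squares fit of the virtual CCD line; the helper that places evenly spaced virtual detector elements along the fitted line is an illustrative assumption, not prescribed by the patent:

```python
import numpy as np

def fit_virtual_ccd(xc, yc):
    """Least-squares fit of the virtual CCD line xc = a*yc + b, using the
    camera coordinates (xc, yc) of all real detector elements as observations."""
    A = np.column_stack([yc, np.ones_like(yc)])
    (a, b), *_ = np.linalg.lstsq(A, xc, rcond=None)
    return a, b

def virtual_detectors(a, b, yc_start, pixel_pitch, n_det):
    """Camera coordinates of n_det virtual detector elements, evenly spaced
    along the fitted line (even spacing is an illustrative assumption)."""
    yc = yc_start + pixel_pitch * np.arange(n_det)
    return a * yc + b, yc   # (xc, yc) of each virtual detector element
```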
Step 5: realizing virtual re-imaging. On the basis of the established imaging geometric relationship between the virtual and real cameras, the virtual re-imaging method resamples the image acquired by the real camera to obtain a virtual undistorted image. The procedure is as follows (a code sketch follows the list):
Step 5.1: constructing the virtual camera geometric positioning model using the interior orientation element model of the virtual camera from step 4;
Step 5.2: for any pixel (x, y) on the virtual image, calculating its corresponding ground coordinates (X, Y, Z) using the virtual camera geometric positioning model established in step 5.1;
Step 5.3: back-projecting (X, Y, Z) to real image coordinates (x', y') according to the real camera geometric positioning model;
Step 5.4: resampling the gray value at (x', y') and assigning it to pixel (x, y);
Step 5.5: traversing all pixels on the virtual image and repeating steps 5.2-5.4 to generate the whole scene, obtaining the undistorted image;
Step 5.6: generating the RPC file corresponding to the undistorted image based on the virtual camera geometric positioning model, completing the production of the undistorted image.
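The loop of steps 5.2-5.5 can be sketched as below; image_to_ground, ground_to_image and resample stand for the virtual positioning model, the real positioning model and the gray-value interpolator respectively, and are assumed callables rather than routines defined in the patent:

```python
import numpy as np

def virtual_reimaging(image_to_ground, ground_to_image, resample,
                      real_image, dem, n_lines, n_samples):
    """Virtual re-imaging loop of steps 5.2-5.5."""
    out = np.zeros((n_lines, n_samples), dtype=real_image.dtype)
    for line in range(n_lines):
        for samp in range(n_samples):
            # step 5.2: virtual pixel -> ground point, elevation from SRTM-DEM
            X, Y, Z = image_to_ground(samp, line, dem)
            # step 5.3: ground point -> real image coordinates (x', y')
            xr, yr = ground_to_image(X, Y, Z)
            # step 5.4: resample the real image and assign the gray value
            out[line, samp] = resample(real_image, xr, yr)
    return out
```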
The resampling in step 5.4 is based on SRTM-DEM data.

Because the imaging-ray angles of the virtual and real cameras differ, the elevation error of the object point introduces a projection difference during resampling. The real satellite and the virtual satellite observe the same ground point (X, Y, Z) and obtain image coordinates (x, y) and (x', y') respectively. In the formula below, θ0 and θ1 are the imaging angles of the real and virtual rays, Δh is the object-point elevation error, and Δx is the projection difference introduced by Δh during re-imaging. From the geometric relationship (see fig. 3):
Δx = Δh·(tanθ0 - tanθ1)    (15)

According to the position of the virtual CCD, θ0 and θ1 differ only slightly; for the optical satellite cameras studied by this method, the projection difference caused by elevation error during re-imaging is ignored when SRTM-DEM data are adopted.
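As a rough plausibility check with assumed, illustrative numbers (not figures from the patent): an SRTM-level elevation error of about 10 m combined with a 0.02 degree difference between the real and virtual ray angles yields a projection difference of only a few millimetres:

```python
import numpy as np

dh = 10.0                                             # assumed elevation error (m)
theta0, theta1 = np.radians(1.00), np.radians(0.98)   # assumed ray angles
dx = dh * (np.tan(theta0) - np.tan(theta1))           # formula (15)
print(f"projection difference: {dx * 1000:.1f} mm")   # ~3.5 mm, far below a pixel
```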
Advantageous effects:

1. The method for producing undistorted images of a satellite-borne push-broom optical sensor disclosed by the invention establishes a ground control point library and assists point measurement in a semi-automatic manner, so that control points are acquired rapidly, time is saved and efficiency is improved.

2. The method compensates satellite orbit and attitude errors through a bias matrix, improves the exterior orientation accuracy, and realizes the generation of undistorted images for push-broom optical sensors.
Drawings
Fig. 1 shows the relative positional relationship between the virtual camera CCDs and the real camera CCDs in step 4 of the invention, where: (a) panchromatic single camera, (b) field-stitched panchromatic dual camera, (c) multispectral single camera, (d) field-stitched multispectral dual camera.
FIG. 2 is a schematic view of step 5 virtual re-imaging of the present invention;
FIG. 3 is a projected differential view of elevation error introduced during step 5 virtual re-imaging in accordance with the present invention;
FIG. 4 is a partial view of "warped" artificial markers in XX6A imagery;
FIG. 5 shows the registration result of an ideal undistorted image;
FIG. 6 shows the fusion result of an ideal undistorted image.
Detailed Description
For a better understanding of the objects and advantages of the present invention, reference should be made to the following detailed description taken in conjunction with the accompanying drawings and examples.
Two images of the Yunnan area acquired by XX6A on 9 December 2014 and 1 March 2015 were used as experimental data. The imaging area is mainly mountainous, with an average elevation of 2854.5 m and a maximum height difference of about 1698 m. Several square artificial markers are distributed in the images; however, because the XX6A satellite uses the CAST2000 small-satellite platform, whose stability is low, the imaged square markers are distorted by platform jitter during imaging, as shown in fig. 4.
the XX6A full-color B camera comprises 4 CCD linear arrays, wherein 4 CCDs are arranged in an arch shape according to internal orientation elements obtained by geometric calibration, adjacent CCDs are overlapped by about 200 pixels and are staggered by about 2 pixels along the track, and the imaging time interval of the same-name point is observed in parallel by about 0.0003 s.
Example 1:
the method for manufacturing the distortion-free image of the satellite-borne push-broom optical sensor disclosed by the embodiment comprises the following specific implementation steps:
and step 1, resolving the satellite orbit and the attitude.
The real geometric positioning model of the satellite image can be solved using the rational function model, which is a high-order fit of the rigorous imaging model and can stand in for it. The main step is to recover the position and attitude of the ray at the time of imaging, i.e., to position and orient the ray. The geodetic coordinates of the intersections of the ray with the highest and lowest elevation surfaces are calculated through the inverse form of the rational function model, and the two ground intersection points are converted from the geodetic coordinate system into the geocentric rectangular coordinate system. Once the object coordinates of the two intersection points are determined, the direction of the photographic ray in the geocentric rectangular coordinate system is the difference of their positions.
First, the photographic ray OA intersects the highest and lowest elevation planes at two points with geodetic coordinates [Lat1, Lon1, H1] and [Lat2, Lon2, H2], respectively. The image coordinates of both points are (x, y). Both [Lat1, Lon1] and [Lat2, Lon2] are solved by inverse calculation according to formula (1). Correspondingly, the coordinates are converted from the geodetic coordinate system to the geocentric rectangular coordinate system, giving [X1, Y1, Z1] and [X2, Y2, Z2]. This process can also be solved by the rigorous model, as shown below.
[X, Y, Z]^T = [X(t), Y(t), Z(t)]^T + m · R(t) · [tan(Ψx), tan(Ψy), 1]^T
Because the intersection points with the two elevation planes share the same image point, the two points have the same [X(t), Y(t), Z(t)], R(t) and [tan(Ψx), tan(Ψy)], but differ in object coordinates [Xi, Yi, Zi] and scale factor m; the rigorous imaging model of the two intersections can therefore be expressed as follows.
[Xi, Yi, Zi]^T = [X(t), Y(t), Z(t)]^T + mi · R(t) · [tan(Ψx), tan(Ψy), 1]^T,  i = 1, 2
where [Xi, Yi, Zi] are the geocentric rectangular coordinates of intersection point i. Substituting the two intersection points into the equation and taking the quotient gives:

(X1 - X(t)) / (X2 - X(t)) = (Y1 - Y(t)) / (Y2 - Y(t)) = (Z1 - Z(t)) / (Z2 - Z(t)) = k
where k = m1 / m2.
The above formula can be rewritten as:

X1 - X(t) = k · (X2 - X(t))
Y1 - Y(t) = k · (Y2 - Y(t))
Z1 - Z(t) = k · (Z2 - Z(t))
Rewriting and eliminating the unknown k gives:

(X1 - X(t)) · (Y2 - Y(t)) = (X2 - X(t)) · (Y1 - Y(t))
(X1 - X(t)) · (Z2 - Z(t)) = (X2 - X(t)) · (Z1 - Z(t))
which can be rearranged into the linear form:

(Y2 - Y1) · X(t) + (X1 - X2) · Y(t) = X1·Y2 - X2·Y1
(Z2 - Z1) · X(t) + (X1 - X2) · Z(t) = X1·Z2 - X2·Z1
There are three unknowns in the above formula, but one ray yields only two independent equations; hence, to solve for the position [X(t), Y(t), Z(t)], a second ray is used and the equations are solved jointly. The process is linear, requiring neither initial values nor iteration. This completes the orbit solution.
Step 2: solving the attitude after the orbit, i.e., the rotation matrix from the body coordinate system to the geocentric rectangular coordinate system. Since the actual body coordinate system cannot be obtained from the rational function model, the correlation between the attitude and the interior orientation elements is resolved using the concept of the equivalent body coordinate system.
The equivalent body coordinate system is designed such that the Z axis points toward the ground and is the resultant of the unit vectors of rays OA and OB. The directions of OA and OB are obtained from the positions solved above, i.e., [X2 - X1, Y2 - Y1, Z2 - Z1], and the direction of the Z axis in the geocentric rectangular coordinate system is:
Z = (uOA + uOB) / |uOA + uOB|

where uOA and uOB are the unit direction vectors of rays OA and OB. The X axis of the equivalent body coordinate system points along the flight direction, i.e., perpendicular to the plane OAB. Its direction in the geocentric rectangular coordinate system is:
X = (uOA × uOB) / |uOA × uOB|

The Y axis of the equivalent body coordinate system is determined from the X and Z axes by the right-hand rule:
Y = Z × X

After determining the directions of the three axes of the equivalent body coordinate system in the geocentric coordinate system, R(t) can be constructed as:

R(t) = [X  Y  Z]

where the columns of R(t) are the axis direction vectors in the geocentric rectangular coordinate system.
the rotation matrix can be converted into a quaternion form or an Euler angle form and is used for describing the attitude state of the satellite.
Step 3: solving the interior orientation elements after the orbit and attitude states. The direction of the ray in the geocentric rectangular coordinate system is obtained from the rational function model. Since the interior orientation elements only represent the direction of the photographic ray in the body coordinate system, the ray direction can be converted from the geocentric rectangular coordinate system to the equivalent body coordinate system using the solved attitude. Note that this method recovers only relative interior orientation elements, because the equivalent body coordinate system was introduced precisely to decouple the correlation between attitude and interior orientation. The solution proceeds ray by ray, and the interior orientation elements are expressed as detector pointing angles.
The pointing of sensor detector element A in the body coordinate system is expressed as follows:
[X, Y, Z]body^T = R(t)^T · [X2 - X1, Y2 - Y1, Z2 - Z1]^T
where [X, Y, Z]body denotes the direction of the photographic ray in the body coordinate system. According to the definition of the detector pointing angle, the interior orientation elements of detector element A can be expressed independently as:
tan(Ψx) = Xbody / Zbody
tan(Ψy) = Ybody / Zbody

Homonymous-point intersection errors before and after high-frequency error elimination were calculated and compared (the results table appears only as an image in the original publication).
and 4, constructing a geometric positioning model of the virtual camera. After on-orbit geometric calibration and high-frequency error elimination are completed, a distortion-free geometric positioning model can be obtained; the influence of lens distortion, high-frequency error and the like in the image is not eliminated, and the application effects of subsequent registration fusion and the like of the image are reduced due to complex deformation in the image. In the linear array push-broom imaging process, the main factors causing image complex deformation on the satellite comprise: 1) lens distortion and other internal orientation element errors cause the imaging light to deviate from an ideal direction, so that high-order deformation (image point deviation caused by reference of internal orientation elements) is caused; 2) the integral time jumps to change the resolution of the image along the track; 3) the "distortion" of the image caused by the pose jitter.
The virtual camera is assumed to exist: its platform shoots the same area synchronously with the real satellite at almost the same orbit position and attitude; the camera is free of lens distortion, its linear array is an ideal straight line, it push-brooms stably and exposes with a constant integration time, so the acquired image is an undistorted image. By establishing the imaging geometric relationship between the virtual and real cameras and resampling the image acquired by the real camera, the complex deformation in the real image can be eliminated.
The key to constructing the geometric positioning model of the virtual camera is to model the platform orbit, attitude, line scan time and interior orientation elements.
Because the virtual camera platform shoots synchronously with the real satellite at almost the same orbit and attitude, and its stable operation is unaffected by attitude jitter, a polynomial can be fitted to the discrete orbit and attitude data downlinked from the real satellite, and the fitted polynomial serves as the attitude and orbit model of the virtual camera platform.
As for the line scan time, the virtual camera exposes the ground with a constant integration time. Suppose the virtual camera and the real camera start imaging at the same time t0; the imaging time t of virtual image line l is:

t = t0 + τ·l    (formula 2)

where τ is the virtual camera integration time, which can be taken as the average integration time of the real camera.
Generally, images acquired by multiple CCD linear arrays are discontinuous, and a continuous image is obtained by stitching, which removes the overlapping pixels between adjacent CCDs and the along-track stagger. The virtual camera contains only a single, ideally linear CCD, so the image it acquires is continuous. To keep the resolutions of the virtual and real images similar, the two cameras share the same principal distance. Meanwhile, to reduce the pointing difference between real and virtual imaging rays, the camera coordinates (xc, yc) of each detector element of the real camera are used as observations to fit the line equation xc = a·yc + b of the virtual CCD in the camera coordinate system, which serves as the interior orientation model of the virtual camera. Fig. 1 shows the positional relationships between four common virtual CCD arrangements and the real CCD arrays, used respectively for single-camera stitching, multispectral band registration and single-satellite dual-camera stitching, finally generating the undistorted image.
The ideal undistorted image generated by this method is free of the complex deformation caused by camera distortion, high-frequency errors and the like, and the internal accuracy of the stitched image is high; this internal accuracy determines the effectiveness of subsequent registration and fusion. FIG. 5 shows the registration result of the XX10 panchromatic/multispectral mosaic produced by the method; the registration points of the ideal undistorted image are uniformly distributed, and the final fused image is of good quality, as shown in fig. 6.
Step 5: realizing virtual re-imaging. The virtual re-imaging algorithm resamples the image acquired by the real camera, based on the established imaging geometric relationship between the virtual and real cameras, to obtain a virtual undistorted image. The algorithm flow is as follows:
1) constructing the virtual camera geometric positioning model;
2) for any pixel (x, y) on the undistorted image, calculating its corresponding ground coordinates (X, Y, Z) using the geometric positioning model built in 1);
3) back-projecting (X, Y, Z) to real image coordinates (x', y') according to the real camera geometric positioning model;
4) resampling the gray value at (x', y') and assigning it to pixel (x, y);
5) traversing all pixels on the undistorted image and repeating 2)-4) to generate the whole scene image;
6) generating the RPC file corresponding to the virtual image based on the virtual camera geometric positioning model.
Because the imaging-ray angles of the virtual and real cameras differ, the elevation error of the object point may introduce a projection difference during re-imaging. In the formula below, θ0 and θ1 are the imaging angles of the real and virtual rays, Δh is the object-point elevation error, and Δx is the projection difference introduced by Δh during re-imaging. From the geometric relationship:
Δx = Δh·(tanθ0 - tanθ1)    (formula 3)

According to the position of the virtual CCD, θ0 and θ1 differ only slightly; for the domestic optical satellite cameras studied by this method, the projection difference caused by elevation error during re-imaging can be ignored when SRTM-DEM data are adopted.
Exterior orientation is performed with an image-plane affine model based on the undistorted-image RPCs, using the control points of each scene, and the relative positioning accuracy of the images is verified through the orientation accuracy; the results are listed in the table below (Original denotes the stitched image generated with the original attitude data, Corrected denotes the undistorted image generated after high-frequency error elimination). Owing to the XX10 high-frequency errors, the relative positioning accuracy with the original attitude is only at the level of several to more than ten pixels, with randomly distributed positioning residuals that conventional processing can hardly remove; after the high-frequency errors are eliminated, without any additional control data, the relative positioning accuracy improves to about 1 pixel, verifying the correctness of the method.
(The accuracy comparison tables appear only as images in the original publication.)
The above detailed description is intended to illustrate the objects, aspects and advantages of the present invention, and it should be understood that the above detailed description is only exemplary of the present invention and is not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (3)

1. A method for manufacturing an undistorted image of a satellite-borne push-broom optical sensor, characterized by comprising the following steps:
step 1, recovering the position and attitude of the ray at the time of imaging by solving the satellite orbit, namely positioning and orienting the ray;
the geodetic coordinates of two intersection points are solved through the inverse form of the rational function model, the two intersection points being the intersections of the imaging ray with the highest and lowest elevation surfaces; the geodetic coordinates are [Lat, Lon, H], and the calculation formula is:
Lat = P5(x, y, H) / P6(x, y, H)
Lon = P7(x, y, H) / P8(x, y, H)    (1)
where P5, P6, P7, P8 are the polynomials from which the geodetic coordinates are solved; (x, y) are the image coordinates of the intersection point, H is the elevation, Lat is the latitude coordinate of the intersection point in the geodetic coordinate system, and Lon is the longitude coordinate of the intersection point in the geodetic coordinate system;
the two ground intersection points in the geodetic coordinate system can be converted into the geocentric rectangular coordinate system; once the object coordinates of the two intersection points are determined, the direction of the photographic ray in the geocentric rectangular coordinate system is the difference of their positions;
first, the photographic ray OA intersects the highest and lowest elevation planes at two points whose geodetic coordinates are [Lat1, Lon1, H1] and [Lat2, Lon2, H2], respectively, where H1 is the elevation of the highest elevation plane, Lon1 and Lat1 are the longitude and latitude of the intersection of OA with the highest elevation plane in the geodetic coordinate system, H2 is the elevation of the lowest elevation plane, and Lon2 and Lat2 are the longitude and latitude of the intersection of OA with the lowest elevation plane in the geodetic coordinate system;
the image coordinates of the two intersection points are unified as (x, y); [Lat1, Lon1] and [Lat2, Lon2] are solved by inverse calculation according to formula (1); correspondingly, the coordinates of the two intersection points are converted from the geodetic coordinate system to the geocentric rectangular coordinate system, giving [X1, Y1, Z1] and [X2, Y2, Z2];
step 2, attitude calculation: solving the rotation matrix from the body coordinate system to the geocentric rectangular coordinate system;
the correlation between the attitude and the interior orientation elements is resolved using the concept of an equivalent body coordinate system;
the equivalent body coordinate system is as follows: the X axis points along the flight direction; the Z axis points toward the ground and is the resultant of the unit vectors of the two photographic rays; the direction of each photographic ray is obtained from step 1 as [X2 - X1, Y2 - Y1, Z2 - Z1], and the direction of the Z axis in the geocentric rectangular coordinate system is:
Z = (uOA + uOB) / |uOA + uOB|    (8)

where uOA and uOB are the unit direction vectors of the two photographic rays OA and OB;
the X axis of the equivalent body coordinate system is perpendicular to the plane OAB; its direction in the geocentric rectangular coordinate system is:
X = (uOA × uOB) / |uOA × uOB|    (9)

the Y axis of the equivalent body coordinate system is determined from the X and Z axes by the right-hand rule:
Y = Z × X    (10)

after determining the directions of the three axes of the equivalent body coordinate system in the geocentric coordinate system, R(t) is constructed as:

R(t) = [X  Y  Z]    (11)

where the columns of R(t) are the direction vectors of the three body axes in the geocentric rectangular coordinate system;
R(t) is converted into quaternion or Euler-angle form, which can be used to describe the attitude state of the satellite and realize the attitude calculation;
step 3, resolving internal orientation elements;
the direction of the photographic ray is converted from the geocentric rectangular coordinate system to the equivalent body coordinate system using the direction obtained in step 1 and the attitude obtained in step 2; the solution proceeds ray by ray;
the pointing of the sensor detector element in the body coordinate system is expressed as:
[X, Y, Z]body^T = R(t)^T · [X2 - X1, Y2 - Y1, Z2 - Z1]^T    (12)
where [X, Y, Z]body denotes the direction of the photographic ray in the body coordinate system; according to the definition of the detector pointing angle, the interior orientation elements of the current sensor detector element are expressed independently as:
tan(Ψx) = Xbody / Zbody
tan(Ψy) = Ybody / Zbody    (13)
where Xbody, Ybody and Zbody are the X, Y and Z coordinates in the body coordinate system;
step 4, constructing the interior orientation element model of the virtual camera;
a virtual camera is defined whose platform shoots the same area synchronously with the real satellite at almost the same orbit position and attitude; since the camera is free of lens distortion and its linear array is an ideal straight line, it push-brooms stably and exposes with a constant integration time, and the image acquired by the virtual camera is regarded as an undistorted image; by establishing the imaging geometric relationship between the virtual and real cameras and resampling the image acquired by the real camera, the complex deformation in the real image can be eliminated;
because the virtual camera platform shoots synchronously with the real satellite at almost the same orbit and attitude, and its stable operation is unaffected by attitude jitter, a polynomial is fitted to the discrete orbit and attitude data solved for the real satellite, and the fitted polynomial serves as the attitude and orbit model of the virtual camera platform;
as for the line scan time, the virtual camera exposes the ground with a constant integration time; if the virtual camera and the real camera start imaging at the same time t0, the imaging time t of virtual image line l is:

t = t0 + τ·l    (14)

where τ is the virtual camera integration time, taken as the average integration time of the real camera;
images acquired by multiple CCD linear arrays are discontinuous: overlapping pixels between adjacent CCDs must be removed by stitching, and the along-track stagger corrected, to obtain a continuous image; four common positional relationships between the virtual CCD array and the real CCD arrays are used to realize, respectively, a panchromatic single camera, a field-stitched panchromatic dual camera, multispectral band registration, and single-satellite dual-camera stitching; the virtual camera contains only a single, ideally linear CCD, so the image it acquires is continuous; to keep the resolution of the virtual image similar to that of the real image, the virtual and real cameras share the same principal distance;
in the interior orientation element model of the virtual camera, to reduce the pointing difference between the real and virtual imaging rays, a camera coordinate system is first constructed, and the camera coordinates (xc, yc) of each detector element of the real camera are used as observations to fit the line equation xc = a·yc + b of the virtual CCD in the camera coordinate system; the fitted line equation xc = a·yc + b is incorporated into the interior orientation element model of the virtual camera;
step 5, realizing virtual re-imaging: on the basis of the established imaging geometric relationship between the virtual and real cameras, the image acquired by the real camera is resampled to obtain a virtual undistorted image;
step 5.1: constructing the virtual camera geometric positioning model using the interior orientation element model of the virtual camera from step 4;
step 5.2: for any pixel (xvirtual, yvirtual) on the virtual image, calculating its corresponding ground coordinates (Xvirtual, Yvirtual, Zvirtual) using the virtual camera geometric positioning model established in step 5.1;
step 5.3: back-projecting (Xvirtual, Yvirtual, Zvirtual) to real image coordinates (x', y') according to the real camera geometric positioning model;
step 5.4: resampling the gray value at (x', y') and assigning it to the pixel (xvirtual, yvirtual) of step 5.2;
step 5.5: traversing all pixels on the virtual image and repeating steps 5.2-5.4 to generate the whole scene, obtaining the undistorted image;
step 5.6: generating the RPC file corresponding to the undistorted image based on the virtual camera geometric positioning model, completing the production of the undistorted image.
2. A method for manufacturing an undistorted image of a satellite-borne push-broom optical sensor, characterized by comprising the following steps:
step 1, recovering the position and attitude of the ray at the time of imaging by solving the satellite orbit, namely positioning and orienting the ray;
the geocentric rectangular coordinates are obtained through the rigorous model, as shown below:
[X, Y, Z]^T = [X(t), Y(t), Z(t)]^T + m · R(t) · [tan(Ψx), tan(Ψy), 1]^T    (2)
where m is a scale factor; Ψx, Ψy are the pointing angles obtained by decomposing the imaging-ray direction of the detector element into along-track and across-track components; R(t) is the rotation matrix from the body coordinate system to the geocentric rectangular coordinate system; [X(t), Y(t), Z(t)] are the coordinates of the projection center in the geocentric rectangular coordinate system;
because the intersection points with the highest and lowest elevation planes share the same image point, the two intersections have the same [X(t), Y(t), Z(t)], R(t) and [tan(Ψx), tan(Ψy)], but differ in object coordinates [Xi, Yi, Zi] and scale factor m, so the rigorous imaging model of the two intersections is expressed as:
[Xi, Yi, Zi]^T = [X(t), Y(t), Z(t)]^T + mi · R(t) · [tan(Ψx), tan(Ψy), 1]^T,  i = 1, 2    (3)
where [Xi, Yi, Zi] are the geocentric rectangular coordinates of intersection point i; substituting the two intersection points into formula (3) and taking the quotient gives:
(X1 - X(t)) / (X2 - X(t)) = (Y1 - Y(t)) / (Y2 - Y(t)) = (Z1 - Z(t)) / (Z2 - Z(t)) = k    (4)
where k = m1 / m2;
the following formula is obtained by rewriting formula (4):

X1 - X(t) = k · (X2 - X(t))
Y1 - Y(t) = k · (Y2 - Y(t))
Z1 - Z(t) = k · (Z2 - Z(t))    (5)

where the geocentric rectangular coordinates of the intersection with the highest elevation plane are (X1, Y1, Z1) and those of the intersection with the lowest elevation plane are (X2, Y2, Z2);

rewriting formula (5) and eliminating the unknown k yields:

(X1 - X(t)) · (Y2 - Y(t)) = (X2 - X(t)) · (Y1 - Y(t))
(X1 - X(t)) · (Z2 - Z(t)) = (X2 - X(t)) · (Z1 - Z(t))    (6)
expanding formula (6) gives the linear form:

(Y2 - Y1) · X(t) + (X1 - X2) · Y(t) = X1·Y2 - X2·Y1
(Z2 - Z1) · X(t) + (X1 - X2) · Z(t) = X1·Z2 - X2·Z1    (7)
there are three unknowns in equation (7), but one ray yields only two independent equations; therefore, to solve for the position [X(t), Y(t), Z(t)], step 1 is repeated for another ray and the resulting equations (7) are solved jointly;
step 2, attitude calculation: solving the rotation matrix from the body coordinate system to the geocentric rectangular coordinate system;
the correlation between the attitude and the interior orientation elements is resolved using the concept of an equivalent body coordinate system;
the equivalent body coordinate system is as follows: the X axis points along the flight direction; the Z axis points toward the ground and is the resultant of the unit vectors of the two photographic rays; the direction of each photographic ray is obtained from step 1 as [X2 - X1, Y2 - Y1, Z2 - Z1], and the direction of the Z axis in the geocentric rectangular coordinate system is:
Z = (uOA + uOB) / |uOA + uOB|    (8)

where uOA and uOB are the unit direction vectors of the two photographic rays OA and OB;
the X axis of the equivalent body coordinate system is perpendicular to the plane OAB; its direction in the geocentric rectangular coordinate system is:
X = (uOA × uOB) / |uOA × uOB|    (9)

the Y axis of the equivalent body coordinate system is determined from the X and Z axes by the right-hand rule:
Y = Z × X    (10)

after determining the directions of the three axes of the equivalent body coordinate system in the geocentric coordinate system, R(t) is constructed as:

R(t) = [X  Y  Z]    (11)

where the columns of R(t) are the direction vectors of the three body axes in the geocentric rectangular coordinate system;
the geodetic coordinate system coordinate transformation matrix R (t) is transformed into a quaternion form or an Euler angle form, and can be used for describing the attitude state of a satellite and realizing attitude calculation;
step 3, resolving internal orientation elements;
the direction of the photographic ray is converted from the geocentric rectangular coordinate system to the equivalent body coordinate system using the direction obtained in step 1 and the attitude obtained in step 2; the solution proceeds ray by ray;
the pointing of the sensor detector element in the body coordinate system is expressed as:
[X, Y, Z]body^T = R(t)^T · [X2 - X1, Y2 - Y1, Z2 - Z1]^T    (12)
where [X, Y, Z]body denotes the direction of the photographic ray in the body coordinate system; according to the definition of the detector pointing angle, the interior orientation elements of the current sensor detector element are expressed independently as:
tan(Ψx) = Xbody / Zbody
tan(Ψy) = Ybody / Zbody    (13)
where Xbody, Ybody and Zbody are the X, Y and Z coordinates in the body coordinate system;
step 4, constructing the interior orientation element model of the virtual camera;
a virtual camera is defined whose platform shoots the same area synchronously with the real satellite at almost the same orbit position and attitude; since the camera is free of lens distortion and its linear array is an ideal straight line, it push-brooms stably and exposes with a constant integration time, and the image acquired by the virtual camera is regarded as an undistorted image; by establishing the imaging geometric relationship between the virtual and real cameras and resampling the image acquired by the real camera, the complex deformation in the real image can be eliminated;
because the virtual camera platform shoots synchronously with the real satellite at almost the same orbit and attitude, and its stable operation is unaffected by attitude jitter, a polynomial is fitted to the discrete orbit and attitude data solved for the real satellite, and the fitted polynomial serves as the attitude and orbit model of the virtual camera platform;
as for the line scan time, the virtual camera exposes the ground with a constant integration time; if the virtual camera and the real camera start imaging at the same time t0, the imaging time t of virtual image line l is t = t0 + τ·l (14), where τ is the virtual camera integration time, taken as the average integration time of the real camera;
images acquired by multiple CCD linear arrays are discontinuous: overlapping pixels between adjacent CCDs must be removed by stitching, and the along-track stagger corrected, to obtain a continuous image; four common positional relationships between the virtual CCD array and the real CCD arrays are used to realize, respectively, a panchromatic single camera, a field-stitched panchromatic dual camera, multispectral band registration, and single-satellite dual-camera stitching; the virtual camera contains only a single, ideally linear CCD, so the image it acquires is continuous; to keep the resolution of the virtual image similar to that of the real image, the virtual and real cameras share the same principal distance;
in the interior orientation element model of the virtual camera, to reduce the pointing difference between the real and virtual imaging rays, a camera coordinate system is first constructed, and the camera coordinates (xc, yc) of each detector element of the real camera are used as observations to fit the line equation xc = a·yc + b of the virtual CCD in the camera coordinate system; the fitted line equation xc = a·yc + b is incorporated into the interior orientation element model of the virtual camera;
step 5, realizing virtual re-imaging: on the basis of the established imaging geometric relationship between the virtual and real cameras, the image acquired by the real camera is resampled to obtain a virtual undistorted image;
step 5.1: constructing the virtual camera geometric positioning model using the interior orientation element model of the virtual camera from step 4;
step 5.2: for any pixel (xvirtual, yvirtual) on the virtual image, calculating its corresponding ground coordinates (Xvirtual, Yvirtual, Zvirtual) using the virtual camera geometric positioning model established in step 5.1;
step 5.3: back-projecting (Xvirtual, Yvirtual, Zvirtual) to real image coordinates (x', y') according to the real camera geometric positioning model;
step 5.4: resampling the gray value at (x', y') and assigning it to the pixel (xvirtual, yvirtual) of step 5.2;
step 5.5: traversing all pixels on the virtual image and repeating steps 5.2-5.4 to generate the whole scene, obtaining the undistorted image;
step 5.6: generating the RPC file corresponding to the undistorted image based on the virtual camera geometric positioning model, completing the production of the undistorted image.
3. The method for manufacturing an undistorted image of a satellite-borne push-broom optical sensor according to claim 1 or 2, characterized in that: the resampling in step 5.4 is based on SRTM-DEM data;
due to the difference of the imaging-ray angles of the virtual camera and the real camera, a projection difference is introduced by the object-point elevation error during resampling; the real satellite and the virtual satellite observe the same ground point (Xvirtual, Yvirtual, Zvirtual), obtaining pixel coordinates (xvirtual, yvirtual) and real image coordinates (x', y') respectively; in the following formula, θ0 and θ1 are the imaging angles of the real and virtual rays, Δh is the object-point elevation error, and Δx is the projection difference introduced by Δh during re-imaging; from the geometric relationship:
Δx = Δh·(tanθ0 - tanθ1)    (15)

according to the position of the virtual CCD, θ0 and θ1 differ only slightly; for the optical satellite cameras studied by this method, the projection difference caused by elevation error during re-imaging is ignored when SRTM-DEM data are adopted.
CN201910350812.1A 2019-04-28 2019-04-28 Method for manufacturing distortion-free image of satellite-borne push-broom optical sensor Active CN110211054B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910350812.1A CN110211054B (en) 2019-04-28 2019-04-28 Method for manufacturing distortion-free image of satellite-borne push-broom optical sensor


Publications (2)

Publication Number Publication Date
CN110211054A 2019-09-06
CN110211054B 2021-01-15

Family

ID=67786541

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant