CN105547258B - On-board optimized calculation method for the integration time of a remote sensing satellite TDICCD camera - Google Patents

On-board optimized calculation method for the integration time of a remote sensing satellite TDICCD camera

Info

Publication number
CN105547258B
CN105547258B CN201610052803.0A CN201610052803A
Authority
CN
China
Prior art keywords
satellite
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610052803.0A
Other languages
Chinese (zh)
Other versions
CN105547258A (en)
Inventor
朱剑冰
赵娜
程博文
赵魏
赵峭
汪路元
郭坚
于龙江
余婧
郭廷源
田贺祥
张强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Spacecraft System Engineering
Original Assignee
Beijing Institute of Spacecraft System Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Spacecraft System Engineering filed Critical Beijing Institute of Spacecraft System Engineering
Priority to CN201610052803.0A priority Critical patent/CN105547258B/en
Publication of CN105547258A publication Critical patent/CN105547258A/en
Application granted granted Critical
Publication of CN105547258B publication Critical patent/CN105547258B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 — Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C 11/02 — Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • G01C 11/025 — Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures, by scanning the object
    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 25/00 — Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Manufacturing & Machinery (AREA)
  • Studio Devices (AREA)

Abstract

The present invention is an on-board optimized calculation method for the integration time of a remote sensing satellite TDICCD camera. The concrete steps are: 1) according to the mounting position of the camera on the satellite, compute for every CCD the transformation matrix from the camera coordinate system to the satellite body coordinate system, and store each matrix as a constant P_k in the memory of the on-board computer; 2) calculate the integration time of the 1st CCD of the 1st group from the satellite orbit data, the satellite attitude data and the camera parameter data; 3) exploiting the fact that the integration-time calculations of adjacent CCDs share many identical intermediate quantities, save the intermediate quantities produced while calculating the integration time of the 1st CCD of the 1st group; 4) using the constants P_k of step 1) and the intermediate results of step 3), calculate the integration times of the remaining M−1 CCDs of the 1st group with the optimized procedure; 5) return to step 2) and repeat steps 2) to 4) to calculate the integration time values of the remaining N−1 groups.

Description

On-board optimized calculation method for the integration time of a remote sensing satellite TDICCD camera
Technical field
The present invention relates to methods for calculating the integration time parameter of optical remote sensing satellite TDICCD cameras, and in particular to an optimized implementation method for real-time integration-time calculation on the on-board computer of an in-orbit optical remote sensing satellite.
Background technology
The integration time is a key parameter of the TDICCD camera of an optical imaging satellite. At present optical imaging satellite cameras generally use TDI-CCD linear-array push-broom imaging. During imaging, the transfer of photo-generated charge packets must stay synchronized with the motion of the image in the focal plane, i.e. within one integration time the image motion on the camera focal plane should equal the length of a single camera pixel. If the integration time is calculated inaccurately, motion blur occurs and the system modulation transfer function (MTF) degrades.
Conventional satellites image in a passive imaging mode (during Earth imaging the three-axis angular rates of the satellite are zero because the attitude-control capability is weak), and the integration time varies little during imaging, so the required update rate of the integration-time calculation is low (typically once per second). With the improvement of the attitude-control agility of China's remote sensing satellites, the attitude of the new agile satellites changes flexibly during imaging: all three axes can rotate and the three-axis angular rates can be non-zero. When the satellite images while maneuvering, the rapid change of the optical-axis slant range not only changes the image scale but also changes the angular rate of the image motion in the camera focal plane, which in turn makes the camera integration-time parameter change rapidly during imaging; the required update rate of the integration-time calculation is therefore higher (typically once every 125 milliseconds), and because the three-axis attitude angular rates must be taken into account, the computation is more complex. Moreover, with the further increase of agile satellite camera resolution, the number of CCD pieces is several times that of a conventional satellite, and every CCD requires its own integration-time calculation, so the integration-time computing load of an agile satellite camera is several times that of a conventional satellite. In summary, compared with a conventional satellite, the integration-time calculation of an agile satellite has a markedly higher update rate, complexity and number of calculations, while current on-board computers are relatively weak; with the traditional calculation method, real-time on-board calculation of the integration time cannot be achieved.
Summary of the invention
The technical problem solved by the present invention is: to provide an optimized implementation method for calculating the integration time of an agile satellite TDICCD optical camera, with which the real-time requirement of the integration-time calculation can be met under very limited on-board computing conditions, providing reliable data support for the imaging quality of the agile satellite camera.
The technical solution of the present invention is an on-board optimized calculation method for the integration time of a remote sensing satellite TDICCD camera, with the following steps:
(1) Suppose one imaging task requires calculating N groups of integration times, and each group requires calculating the integration times of M CCD pieces. According to the mounting position of the camera on the satellite, compute for every CCD the transformation matrix from the camera coordinate system to the satellite body coordinate system, and store each matrix as a constant P_k in the memory of the on-board computer, where k = 1, ..., M;
(2) Calculate the integration time of the 1st CCD of the 1st group from the satellite orbit data, the satellite attitude data and the camera parameter data;
(3) Exploiting the fact that the integration-time calculations of adjacent CCDs share many identical intermediate quantities, save the intermediate quantities produced while calculating the integration time of the 1st CCD of the 1st group.
(4) Using the constants P_k of step (1) and the intermediate results of step (3), calculate the integration times of the remaining M−1 CCDs of the 1st group with the optimized procedure.
(5) Return to step (2); repeat steps (2) to (4) to calculate the integration time values of the remaining N−1 groups.
The steps for calculating the integration time of the 1st CCD of the 1st group in step (2) are as follows:
(21) From the on-board time T of the 1st group, calculate the transformation matrices between the Earth-fixed frame and the inertial frame: L_ECI,ECF = R_z(ζ_A)·R_y(−θ_A)·R_z(z_A)·R_z(−S̄_G),
where ζ_A, z_A, θ_A are the precession parameters; S̄_G is the Greenwich mean sidereal time (GMST); L_ECI,ECF is the Earth-fixed to inertial transformation matrix; L_ECF,ECI is the inertial to Earth-fixed transformation matrix, whose value equals the transpose of L_ECI,ECF.
The four parameters ζ_A, z_A, θ_A and S̄_G are computed as cubic polynomials in t = (JD(T) − JD(J2000.0))/36525,
where JD(T) is the Julian date corresponding to the on-board time T, and JD(J2000.0) = 2451545.0 is the Julian date of epoch J2000.0;
(22) Calculate the satellite position S_ECI in the inertial frame and the orbit-to-inertial transformation matrix L_ECI,O from the orbital position.
The satellite position in the inertial frame is computed as
x_i = r·(cos u·cos Ω − sin u·cos i·sin Ω), y_i = r·(cos u·sin Ω + sin u·cos i·cos Ω), z_i = r·sin u·sin i,
where r is the geocentric distance of the satellite, a is the Earth equatorial radius, f is the true anomaly of the satellite orbit, e is the eccentricity, u = ω + f is the argument of latitude, ω is the argument of perigee, Ω is the right ascension of the ascending node of the satellite orbit, and i is the orbit inclination; (x_i, y_i, z_i) are the three axis components of the satellite position in the inertial frame.
The orbit-to-inertial transformation matrix is L_ECI,O = L_z(−Ω)·L_x(π/2 − i)·L_y(π/2 + u);
(23) Calculate the satellite body-to-orbit transformation matrix L_O,B from the attitude data; L_O,B is composed of elementary rotations through the roll angle φ, the pitch angle θ and the yaw angle ψ, the satellite using the 1-2-3 attitude rotation sequence;
(24) Calculate the satellite position S_ECF in the Earth-fixed frame from the results of (21) and (22):
S_ECF = L_ECF,ECI · S_ECI;
(25) Calculate the optical-axis vector V_ECF in the Earth-fixed frame from the results of (21), (22), (23) and the constant P_k of step (1):
V_ECF = L_ECF,ECI · L_ECI,O · L_O,B · P_1 · V_c,
where V_c is the optical-axis vector in the camera frame, V_c = {0, 0, 1};
(26) Calculate the photography-point position from the results of (24) and (25). The photography point is the intersection of the optical-axis vector with the Earth, obtained by solving the space line equation of the camera optical-axis vector together with the Earth ellipsoid equation:
(x − X_SECF)/X_VECF = (y − Y_SECF)/Y_VECF = (z − Z_SECF)/Z_VECF,  (x² + y²)/a² + z²/c² = 1.
Solving these equations gives x, y, z, the position of the photography point in the Earth-fixed coordinate system; (X_SECF, Y_SECF, Z_SECF) are the three axis components of S_ECF; (X_VECF, Y_VECF, Z_VECF) are the three axis components of V_ECF; a is the Earth equatorial radius and c is the Earth polar radius;
(27) Calculate the slant range h from the results of (24) and (26) and the geographic elevation data stored on board:
h = |SP′| = |SO|·cosα + √(|SO|²·cos²α − |SO|² + |OP′|²),  cosα = (|SO|² + |SP|² − |OP|²)/(2·|SO|·|SP|),
where |SP′| is the corrected slant range h; SO is the distance from the satellite to the Earth's centre, SP is the distance from the satellite to the nominal photography point, OP is the distance from the Earth's centre to the nominal photography point, SP′ is the distance from the satellite to the photography point, and OP′ is the distance from the Earth's centre to the photography point;
(28) Calculate the photography-point ground velocity from the results of (22) and (23), the constant of step (1) and the attitude data. According to the principles of theoretical mechanics, the relative velocity equals the absolute velocity minus the transport velocity, so the photography-point ground velocity is computed as:
v = ω_e × R − [ω_n × r + (ω_n + ω_s) × H + v_r] = ω_e × R − ω_n × R − ω_s × H − v_r,
where v_r is the radial component of the satellite absolute velocity, ω_e is the Earth rotation-rate vector, R is the vector from the Earth's centre to the target point, H is the vector from the satellite to the target point, r is the radius vector from the Earth's centre to the satellite, with magnitude r; ω_n is the orbital angular-velocity vector, whose magnitude is determined by μ and p, with p = a(1−e²); μ is the Earth's gravitational constant, p is the semi-latus rectum of the satellite orbit, e is the eccentricity and f is the true anomaly; ω_s is the attitude angular-velocity vector of the satellite;
(29) From the results of (27) and (28) and the camera pixel-size constant D and focal-length constant F, the integration time of the 1st CCD of the 1st group is obtained as t = (D/F)·(h/v).
In step (3), 11 intermediate results of the integration-time calculation of the 1st CCD of the 1st group are saved, specifically: the satellite position in the Earth-fixed frame Mid_S_ECF, the body-to-orbit transformation matrix Mid_L_O,B, the orbit-to-inertial transformation matrix Mid_L_ECI,O, the inertial-to-Earth-fixed transformation matrix Mid_L_ECF,ECI, the orbit-to-body transformation matrix Mid_L_B,O, the radial vector of the satellite absolute velocity in the orbit frame Mid_v_ro, the satellite orbital angular-velocity vector in the orbit frame Mid_w_no, the Earth rotation-rate vector in the orbit frame Mid_w_eo, the vector from the Earth's centre to the satellite in the orbit frame Mid_r_o, the satellite attitude angular-velocity vector in the body frame Mid_w_sb, and the vector from the satellite to the target point in the camera frame Mid_H_c.
The specific steps by which step (4) uses the constants of step (1) and the intermediate results of step (3) to calculate the integration times of the remaining M−1 CCDs of the 1st group are as follows:
(41) Using the constants of step (1) and the intermediate results Mid_L_O,B, Mid_L_ECI,O, Mid_L_ECF,ECI, calculate the optical-axis vector C_F in the Earth-fixed frame:
C_F = Mid_L_ECF,ECI · Mid_L_ECI,O · Mid_L_O,B · P_k · C_c,
where C_c is the optical-axis vector in the camera frame.
(42) From the optical-axis vector and the intermediate result Mid_S_ECF, calculate the intersection of the optical axis with the Earth, i.e. the photography point, by solving the space line equation of the camera optical-axis vector together with the Earth ellipsoid equation for the intersection position;
here (X_Mid_SECF, Y_Mid_SECF, Z_Mid_SECF) are the three axis components of the satellite Earth-fixed position Mid_S_ECF, (X_CF, Y_CF, Z_CF) are the three axis components of the Earth-fixed optical-axis vector C_F, a is the Earth equatorial radius and c is the Earth polar radius.
(43) Calculate the slant range from the photography point and the surface digital elevation data.
(44) Using the constants obtained in step (1) and the intermediate results Mid_L_B,O, Mid_w_no, Mid_w_eo, Mid_r_o, Mid_w_sb, Mid_H_c and Mid_L_O,B, calculate the velocity of the photography point relative to the camera:
v_b = Mid_L_B,O·[(Mid_w_eo − Mid_w_no) × R_o − Mid_v_ro] − Mid_w_sb × (P_k·Mid_H_c),
where R_o = Mid_r_o + Mid_L_O,B·P_k·(Mid_H_c) is the vector from the Earth's centre to the photography point in the orbit frame.
(45) Calculate the integration time from the slant range obtained in step (43) and the photography-point ground velocity obtained in step (44).
Compared with the prior art, the present invention has the following advantages:
1) Optimization of a single integration-time calculation for a single CCD
The computational load of a single integration-time calculation is reduced by precomputing the sines and cosines of the angles, normalizing dimensions, and expanding the matrix multiplications.
2) Optimization of the integration-time calculations of adjacent CCDs at the same on-board time T
This process exploits the fact that the integration-time calculations of adjacent CCDs share many identical intermediate quantities: the intermediate quantities of the first CCD's integration-time calculation are saved, and all remaining adjacent CCDs at the same time T use these intermediate quantities to accelerate the calculation.
3) Optimization of the integration-time calculations at different on-board times
Because the camera CCD mounting deviation is completely fixed over the whole lifetime of the satellite, this method exploits that property: the transformation matrices from the camera frame to the satellite body frame are calculated in advance and the results are stored in the on-board memory. Whenever the integration-time calculation reaches the step "camera frame to satellite body frame transformation matrix", no computation is needed; the matrix data are read directly from memory, which effectively reduces the computing load of the on-board processor.
Brief description of the drawings
Fig. 1 is the flow of the on-board optimized calculation of the camera integration time.
Fig. 2 is a geometric schematic of the integration-time calculation.
Fig. 3 is the on-board integration-time calculation flow.
Fig. 4 is a schematic of the CCD-slice rotation angles.
Fig. 5 is the calculation flow of the 1st integration time of each group.
Fig. 6 is the camera imaging geometry when the geographic elevation is considered.
Fig. 7 is the optimized calculation flow for the adjacent-CCD integration times.
Detailed description of the embodiments
The integration time is defined as the ratio of the camera pixel size to the image-motion speed, as shown in Fig. 2.
The integration-time calculation formula is t = D/v_x = (D/F)·(h/v),
where v_x is the motion speed of the ground scene in the focal plane, h is the distance from the satellite's centre of mass to the photography point (the slant range), v is the photography-point ground velocity, F is the focal length of the camera, and D is the pixel size. For a given camera the focal length and pixel size are known quantities, so the core of the integration-time calculation is obtaining the ratio of the slant range h to the photography-point ground velocity v.
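A minimal sketch of this relation (the function name and the numerical values below are illustrative assumptions, not taken from the patent):

```python
def integration_time(D, F, h, v):
    """Integration time t = (D / F) * (h / v).
    D: pixel size [m], F: focal length [m],
    h: slant range [m], v: photography-point ground speed [m/s]."""
    return (D / F) * (h / v)

# hypothetical values: 10 um pixel, 10 m focal length, 500 km slant range, 7 km/s ground speed
t = integration_time(10e-6, 10.0, 500e3, 7000.0)
print(f"integration time = {t * 1e6:.1f} microseconds")   # about 71.4
```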
The integration time is an important parameter of satellite camera imaging, so the integration-time calculation must run continuously on board throughout an imaging task. The on-board calculation flow is shown in Fig. 3, where T_B is the start time of one imaging task and T_E is the end time; within [T_B, T_E] one group of integration times is calculated every ΔT. In total N groups of integration times are calculated (N is determined by the imaging duration and the integration-time update rate), and each group requires the integration times of M CCD pieces (M is determined by the number of CCD pieces of the on-board camera).
The present invention proposes an optimized implementation method for calculating the integration time of a remote sensing satellite TDICCD optical camera. Its calculation flow is shown in Fig. 1, and the concrete steps are as follows:
(1) According to the mounting position of each CCD of the camera on the satellite, calculate for every CCD the transformation matrix from the camera coordinate system to the satellite body coordinate system and store it as a constant in the memory of the on-board computer; whenever this matrix is needed during subsequent integration-time calculations, the stored constant is referenced directly instead of being recomputed.
The specific steps are as follows:
(11) From the mounting characteristic values (r_x, r_y, r_z) of the 1st CCD of the camera (the characteristic values are the position coordinates of the camera characteristic point in the satellite body coordinate system), calculate the CCD-slice rotation angles α and β; Fig. 4 is a schematic of these angles.
(12) Calculate the satellite body-to-camera transformation matrix L_C,B from the slice rotation angles:
L_C,B = L_y(β)·L_x(α)
(13) On board, store the transpose of L_C,B (i.e. the camera-to-satellite-body coordinate transformation matrix) as the constant P_1.
(14) Repeat steps (11) to (13) to calculate the camera-to-satellite-body transformation matrices of the remaining M−1 CCDs, storing the transformation matrix of the k-th CCD as the constant P_k (where k = 1, ..., M).
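A sketch of this precomputation, assuming the per-CCD angles α and β have already been derived from the mounting characteristic values; the elementary-rotation convention below matches the expanded L_ECI,O matrix given in claim 2, and everything else (names, values) is illustrative:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])

def precompute_P(mount_angles):
    """For every CCD piece build L_C,B = L_y(beta) * L_x(alpha) and store its
    transpose P_k (camera frame -> satellite body frame) once, so that the
    on-board integration-time loop only reads these constants afterwards."""
    return [(rot_y(beta) @ rot_x(alpha)).T for alpha, beta in mount_angles]

# hypothetical mounting angles (radians) for M = 3 CCD pieces
P = precompute_P([(0.001, -0.02), (0.001, 0.0), (0.001, 0.02)])
```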
(2) Calculate the integration time of the 1st CCD of the 1st group from the satellite orbit data, the satellite attitude data and the camera parameter data. The calculation flow is shown in Fig. 5, and the specific steps are as follows:
(21) From the on-board time T of the 1st group, calculate the transformation matrices between the Earth-fixed frame and the inertial frame: L_ECI,ECF = R_z(ζ_A)·R_y(−θ_A)·R_z(z_A)·R_z(−S̄_G),
where ζ_A, z_A, θ_A are the precession parameters; S̄_G is the Greenwich mean sidereal time (GMST); L_ECI,ECF is the Earth-fixed to inertial transformation matrix; L_ECF,ECI is the inertial to Earth-fixed transformation matrix, whose value equals the transpose of L_ECI,ECF.
The four parameters ζ_A, z_A, θ_A and S̄_G are computed as cubic polynomials in t, the time interval in Julian centuries since J2000.0, t = (JD(T) − JD(J2000.0))/36525, where JD(T) is the Julian date corresponding to the on-board time T and JD(J2000.0) = 2451545.0 is the Julian date of epoch J2000.0.
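The polynomial coefficients (written out in claim 2) can be evaluated directly; a sketch, assuming the angular units are arcseconds for ζ_A, z_A, θ_A and hours/seconds of time for S̄_G, and using the elementary-rotation convention implied by claim 2:

```python
import numpy as np

ARCSEC = np.pi / (180.0 * 3600.0)   # arcseconds of arc -> radians
HOUR = np.pi / 12.0                 # hours of time -> radians
SEC = HOUR / 3600.0                 # seconds of time -> radians

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])

def eci_ecf_matrices(jd):
    """Step (21): precession angles, GMST and the ECF<->ECI matrices."""
    t = (jd - 2451545.0) / 36525.0   # Julian centuries since J2000.0
    zeta = (2306.2181 * t + 0.30188 * t**2 + 0.017998 * t**3) * ARCSEC
    z = (2306.2181 * t + 1.09468 * t**2 + 0.018203 * t**3) * ARCSEC
    theta = (2004.3109 * t - 0.42665 * t**2 - 0.041833 * t**3) * ARCSEC
    sg = (18.6973746 + 879000.0513367 * t) * HOUR + (0.093104 * t**2 - 6.2e-6 * t**3) * SEC
    L_eci_ecf = rot_z(zeta) @ rot_y(-theta) @ rot_z(z) @ rot_z(-sg)
    return L_eci_ecf, L_eci_ecf.T    # (ECF -> ECI, ECI -> ECF)
```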
(22) Calculate the satellite position S_ECI in the inertial frame and the orbit-to-inertial transformation matrix L_ECI,O from the orbital position.
The satellite position in the inertial frame is computed as
x_i = r·(cos u·cos Ω − sin u·cos i·sin Ω), y_i = r·(cos u·sin Ω + sin u·cos i·cos Ω), z_i = r·sin u·sin i,
where r is the geocentric distance of the satellite, a is the Earth equatorial radius, f is the true anomaly of the satellite orbit, e is the eccentricity, u = ω + f is the argument of latitude, ω is the argument of perigee, Ω is the right ascension of the ascending node of the satellite orbit, and i is the orbit inclination; (x_i, y_i, z_i) are the three axis components of the satellite position in the inertial frame.
The orbit-to-inertial transformation matrix is L_ECI,O = L_z(−Ω)·L_x(π/2 − i)·L_y(π/2 + u).
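A sketch of step (22), using the L_ECI,O composition above; the geocentric distance is assumed here to follow the standard conic relation r = a(1−e²)/(1+e·cos f) with a the orbit semi-major axis (an assumption of this sketch), and the variable names are illustrative:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def orbit_to_inertial(a_sma, e, inc, raan, argp, f):
    """Step (22): satellite ECI position and the orbit -> inertial matrix."""
    u = argp + f                                      # argument of latitude
    r = a_sma * (1.0 - e**2) / (1.0 + e * np.cos(f))  # geocentric distance (assumed relation)
    s_eci = r * np.array([
        np.cos(u) * np.cos(raan) - np.sin(u) * np.cos(inc) * np.sin(raan),
        np.cos(u) * np.sin(raan) + np.sin(u) * np.cos(inc) * np.cos(raan),
        np.sin(u) * np.sin(inc),
    ])
    L_eci_o = rot_z(-raan) @ rot_x(np.pi / 2 - inc) @ rot_y(np.pi / 2 + u)
    return s_eci, L_eci_o
```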
(23) Calculate the satellite body-to-orbit transformation matrix L_O,B from the attitude data; L_O,B is composed of elementary rotations through the roll angle φ, the pitch angle θ and the yaw angle ψ, the satellite using the 1-2-3 attitude rotation sequence.
(24) Calculate the satellite position S_ECF in the Earth-fixed frame from the results of (21) and (22):
S_ECF = L_ECF,ECI · S_ECI
(25) Calculate the optical-axis vector V_ECF in the Earth-fixed frame from the results of (21), (22), (23) and the constant P_k of step (1):
V_ECF = L_ECF,ECI · L_ECI,O · L_O,B · P_1 · V_c,
where V_c is the optical-axis vector in the camera frame, V_c = {0, 0, 1}.
(26) Calculate the photography-point position from the results of (24) and (25).
The photography point is the intersection of the optical-axis vector with the Earth, obtained by solving the space line equation of the camera optical-axis vector together with the Earth ellipsoid equation:
(x − X_SECF)/X_VECF = (y − Y_SECF)/Y_VECF = (z − Z_SECF)/Z_VECF,  (x² + y²)/a² + z²/c² = 1.
Solving these equations gives x, y, z, the position of the photography point in the Earth-fixed coordinate system; (X_SECF, Y_SECF, Z_SECF) are the three axis components of S_ECF; (X_VECF, Y_VECF, Z_VECF) are the three axis components of V_ECF; a is the Earth equatorial radius and c is the Earth polar radius.
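A sketch of this line–ellipsoid intersection: the ray is parameterised as S_ECF + λ·V_ECF and the root giving the near-side (visible) intersection is selected; the default a and c values are illustrative assumptions:

```python
import numpy as np

def photo_point(S_ecf, V_ecf, a=6378137.0, c=6356752.3):
    """Step (26): intersection of the optical-axis ray S + lam*V with the
    ellipsoid (x^2 + y^2)/a^2 + z^2/c^2 = 1 (nearer intersection)."""
    w = np.array([1.0 / a**2, 1.0 / a**2, 1.0 / c**2])
    A = np.sum(w * V_ecf**2)
    B = 2.0 * np.sum(w * S_ecf * V_ecf)
    C = np.sum(w * S_ecf**2) - 1.0
    disc = B**2 - 4.0 * A * C
    if disc < 0.0:
        raise ValueError("optical axis does not intersect the Earth ellipsoid")
    lam = (-B - np.sqrt(disc)) / (2.0 * A)     # smaller root = near-side point
    return S_ecf + lam * V_ecf
```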
(27) Calculate the slant range h from the results of (24) and (26) and the geographic elevation data stored on board. The specific procedure is as follows:
The photography point is the intersection of the camera optical axis with the Earth surface when the terrain elevation is not considered. The angle α is first obtained without the geographic elevation data; the geographic elevation data is then introduced and the relevant parameters are corrected. As shown in Fig. 6, once the elevation data is taken into account the photography point of the camera optical axis moves from P to P′, with S, P and P′ collinear. In the figure, OP is the local Earth radius without the geographic elevation data, and OP′ is the corrected local Earth radius at the photography point after the elevation data is considered; its value equals the magnitude of OP plus the geographic elevation:
h = |SP′| = |SO|·cosα + √(|SO|²·cos²α − |SO|² + |OP′|²),  cosα = (|SO|² + |SP|² − |OP|²)/(2·|SO|·|SP|),
where |SP′| is the corrected slant range h; SO is the distance from the satellite to the Earth's centre, SP is the distance from the satellite to the nominal photography point, OP is the distance from the Earth's centre to the nominal photography point, SP′ is the distance from the satellite to the photography point, and OP′ is the distance from the Earth's centre to the photography point.
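A sketch of this correction: the local Earth radius at the photography point is raised by the terrain elevation and the distance is re-solved in the triangle S–O–P′; of the two roots of that triangle relation, the one corresponding to the near-side (visible) intersection is taken here, which is an assumption of this sketch:

```python
import numpy as np

def slant_range(S_ecf, P_ecf, elevation):
    """Step (27): slant range from satellite S to the photography point,
    with the local Earth radius corrected by the terrain elevation [m]."""
    SO = np.linalg.norm(S_ecf)            # satellite to Earth centre
    SP = np.linalg.norm(P_ecf - S_ecf)    # satellite to nominal photo point
    OP = np.linalg.norm(P_ecf)            # Earth centre to nominal photo point
    OP2 = OP + elevation                  # corrected local radius |OP'|
    cos_a = (SO**2 + SP**2 - OP**2) / (2.0 * SO * SP)
    disc = SO**2 * cos_a**2 - SO**2 + OP2**2
    return SO * cos_a - np.sqrt(disc)     # near-side root along the same line of sight
```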
(28) Calculate the photography-point ground velocity from the results of (22) and (23), the constant of step (1) and the attitude data. According to the principles of theoretical mechanics, the relative velocity equals the absolute velocity minus the transport velocity, so the photography-point ground velocity is computed as:
v = ω_e × R − [ω_n × r + (ω_n + ω_s) × H + v_r] = ω_e × R − ω_n × R − ω_s × H − v_r,
where v_r is the radial component of the satellite absolute velocity, ω_e is the Earth rotation-rate vector, R is the vector from the Earth's centre to the target point, H is the vector from the satellite to the target point, r is the radius vector from the Earth's centre to the satellite, with magnitude r; ω_n is the orbital angular-velocity vector, whose magnitude is determined by μ and p, with p = a(1−e²); μ is the Earth's gravitational constant, p is the semi-latus rectum of the satellite orbit, e is the eccentricity and f is the true anomaly; ω_s is the attitude angular-velocity vector of the satellite.
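A sketch of this velocity composition, assuming all vectors are expressed in one common frame before the cross products are taken:

```python
import numpy as np

def ground_velocity(w_earth, w_orbit, w_att, R, H, v_radial):
    """Step (28): v = w_e x R - w_n x R - w_s x H - v_r.
    w_earth : Earth rotation-rate vector
    w_orbit : orbital angular-velocity vector
    w_att   : satellite attitude angular-velocity vector
    R       : Earth centre -> target vector,  H : satellite -> target vector
    v_radial: radial component of the satellite absolute velocity (as a vector)."""
    return (np.cross(w_earth, R) - np.cross(w_orbit, R)
            - np.cross(w_att, H) - v_radial)
```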
(29) From the results of (27) and (28) and the camera pixel-size constant D and focal-length constant F, the integration time of the 1st CCD of the 1st group is obtained as t = (D/F)·(h/v).
The nine-step procedure of step (2) involves a large number of sine and cosine evaluations of attitude angles and orbit angles. In the on-board software these evaluations all require calls to the sin and cos functions of the underlying math library, and such function calls are time-consuming. In this method, the sines and cosines of the attitude angles and orbit angles used by one integration-time calculation are precomputed and stored in the corresponding attitude-data and orbit-data structures, and later uses read the precomputed results directly, which effectively eliminates repeated computation. Because the attitude angles and orbit angles of the satellite are constant within a single integration-time calculation, this precomputation is fully applicable.
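A sketch of this precomputation: the attitude-data structure carries its own sines and cosines, so later matrix builds inside one integration-time calculation never call the math library again (the field names are illustrative assumptions):

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class AttitudeData:
    """Attitude angles plus their precomputed sines and cosines; the angles are
    constant during a single integration-time calculation, so sin/cos are
    evaluated once here and only read afterwards."""
    roll: float
    pitch: float
    yaw: float

    def __post_init__(self):
        self.sin_roll, self.cos_roll = np.sin(self.roll), np.cos(self.roll)
        self.sin_pitch, self.cos_pitch = np.sin(self.pitch), np.cos(self.pitch)
        self.sin_yaw, self.cos_yaw = np.sin(self.yaw), np.cos(self.yaw)

att = AttitudeData(roll=0.01, pitch=-0.02, yaw=0.005)   # hypothetical angles [rad]
# later matrix constructions reuse att.sin_roll, att.cos_roll, ... directly
```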
(3) Save the 11 intermediate results of the integration-time calculation of the 1st CCD of the 1st group.
The intermediate results of the calculation of step (2) (listed in Table 1) are saved for use in the calculations of the adjacent CCDs.
Table 1. Identical intermediate quantities of the adjacent-CCD integration-time calculations
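The 11 cached quantities can be grouped in one container, for example as below; the field names follow the Mid_* identifiers used in the text, while the container itself is an illustrative assumption:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Intermediates:
    """Intermediate quantities of Table 1, saved after the 1st CCD of a group."""
    Mid_S_ECF: np.ndarray      # satellite position, Earth-fixed frame
    Mid_L_OB: np.ndarray       # body -> orbit transformation matrix
    Mid_L_ECI_O: np.ndarray    # orbit -> inertial transformation matrix
    Mid_L_ECF_ECI: np.ndarray  # inertial -> Earth-fixed transformation matrix
    Mid_L_BO: np.ndarray       # orbit -> body transformation matrix
    Mid_v_ro: np.ndarray       # radial velocity of the satellite, orbit frame
    Mid_w_no: np.ndarray       # orbital angular velocity, orbit frame
    Mid_w_eo: np.ndarray       # Earth rotation rate, orbit frame
    Mid_r_o: np.ndarray        # Earth centre -> satellite vector, orbit frame
    Mid_w_sb: np.ndarray       # attitude angular velocity, body frame
    Mid_H_c: np.ndarray        # satellite -> target vector, camera frame
```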
(4) Calculate the integration times of the remaining M−1 CCDs of the 1st group.
This process exploits the fact that the integration-time calculations of adjacent CCDs share many identical intermediate quantities: the intermediate quantities of the first CCD's integration-time calculation are saved, and all remaining adjacent CCDs at the same time T use them to accelerate the calculation. The calculation flow is shown in Fig. 7.
(41) Calculate the optical-axis vector in the Earth-fixed frame from the intermediate results.
Using the constants of step (1) and the intermediate results Mid_L_O,B, Mid_L_ECI,O, Mid_L_ECF,ECI, calculate the optical-axis vector C_F in the Earth-fixed frame:
C_F = Mid_L_ECF,ECI · Mid_L_ECI,O · Mid_L_O,B · P_k · C_c
(42) Calculate the photography point from the intermediate results.
From the optical-axis vector C_F and the intermediate result Mid_S_ECF (the Earth-fixed satellite position), calculate the intersection of the optical axis with the Earth (the photography point) by solving the space line equation of the camera optical-axis vector together with the Earth ellipsoid equation for the intersection position;
here (X_Mid_SECF, Y_Mid_SECF, Z_Mid_SECF) are the three axis components of the satellite Earth-fixed position Mid_S_ECF.
(43) Calculate the slant range from the photography point and the surface digital elevation data; the detailed procedure is the same as step (27).
(44) Calculate the photography-point ground velocity.
From the intermediate results Mid_L_B,O, Mid_w_no, Mid_w_eo, Mid_r_o, Mid_w_sb, Mid_H_c and Mid_L_O,B, the velocity of the photography point relative to the camera is computed as:
v_b = Mid_L_B,O·[(Mid_w_eo − Mid_w_no) × R_o − Mid_v_ro] − Mid_w_sb × (P_k·Mid_H_c),
where R_o = Mid_r_o + Mid_L_O,B·P_k·(Mid_H_c) is the vector from the Earth's centre to the photography point in the orbit frame.
(45) Calculate the integration time at time T.
The integration time is calculated from the slant range of (43) and the photography-point ground velocity of (44).
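A sketch of steps (41)–(45) for one further CCD of the same group, reusing a cached container `mid` whose attributes are named after the Mid_* identifiers (such as the container shown after Table 1). The cross products in step (44) and the near-side root choices in steps (42)–(43) are assumptions of this sketch; D is the pixel size, F the focal length, P_k the stored camera-to-body matrix:

```python
import numpy as np

def ccd_integration_time(mid, P_k, D, F, elevation=0.0, a=6378137.0, c=6356752.3):
    """Optimized per-CCD integration time using the cached intermediates `mid`."""
    Cc = np.array([0.0, 0.0, 1.0])                              # optical axis, camera frame
    # (41) optical axis in the Earth-fixed frame
    CF = mid.Mid_L_ECF_ECI @ mid.Mid_L_ECI_O @ mid.Mid_L_OB @ P_k @ Cc
    # (42) photography point: near intersection of the ray with the ellipsoid
    S = mid.Mid_S_ECF
    w = np.array([1.0 / a**2, 1.0 / a**2, 1.0 / c**2])
    A, B = np.sum(w * CF**2), 2.0 * np.sum(w * S * CF)
    Cq = np.sum(w * S**2) - 1.0
    lam = (-B - np.sqrt(B**2 - 4.0 * A * Cq)) / (2.0 * A)
    P = S + lam * CF
    # (43) slant range corrected for the local terrain elevation
    SO, SP, OP = np.linalg.norm(S), np.linalg.norm(P - S), np.linalg.norm(P)
    cos_a = (SO**2 + SP**2 - OP**2) / (2.0 * SO * SP)
    h = SO * cos_a - np.sqrt(SO**2 * cos_a**2 - SO**2 + (OP + elevation)**2)
    # (44) photography-point velocity relative to the camera, body frame
    Ro = mid.Mid_r_o + mid.Mid_L_OB @ (P_k @ mid.Mid_H_c)
    vb = (mid.Mid_L_BO @ (np.cross(mid.Mid_w_eo - mid.Mid_w_no, Ro) - mid.Mid_v_ro)
          - np.cross(mid.Mid_w_sb, P_k @ mid.Mid_H_c))
    # (45) integration time
    return (D / F) * (h / np.linalg.norm(vb))
```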
(5) Calculate the integration times of the remaining N−1 groups.
Return to step (2); repeat steps (2) to (4) to calculate the integration time values of the remaining N−1 groups (M CCDs per group).

Claims (4)

1. An on-board optimized calculation method for the integration time of a remote sensing satellite TDICCD camera, characterized in that the steps are as follows:
(1) suppose one imaging task requires calculating N groups of integration times, and each group requires calculating the integration times of M CCD pieces; according to the mounting position of the camera on the satellite, compute for every CCD the transformation matrix from the camera coordinate system to the satellite body coordinate system, and store each matrix as a constant P_k in the memory of the on-board computer, where k = 1, ..., M;
(2) calculate the integration time of the 1st CCD of the 1st group from the satellite orbit data, the satellite attitude data and the camera parameter data;
(3) exploiting the fact that the integration-time calculations of adjacent CCDs share many identical intermediate quantities, save the intermediate quantities produced while calculating the integration time of the 1st CCD of the 1st group;
(4) using the constants P_k of step (1) and the intermediate results of step (3), calculate the integration times of the remaining M−1 CCDs of the 1st group with the optimized procedure;
(5) return to step (2); repeat steps (2) to (4) to calculate the integration time values of the remaining N−1 groups.
2. The on-board optimized calculation method for the integration time of a remote sensing satellite TDICCD camera according to claim 1, characterized in that the steps for calculating the integration time of the 1st CCD of the 1st group in step (2) are as follows:
(21) from the on-board time T of the 1st group, calculate the transformation matrices between the Earth-fixed frame and the inertial frame:
L_ECI,ECF = R_z(ζ_A)·R_y(−θ_A)·R_z(z_A)·R_z(−S̄_G)
where ζ_A, z_A, θ_A are the precession parameters; S̄_G is the Greenwich mean sidereal time (GMST); L_ECI,ECF is the Earth-fixed to inertial transformation matrix; L_ECF,ECI is the inertial to Earth-fixed transformation matrix, whose value equals the transpose of L_ECI,ECF;
the four parameters ζ_A, z_A, θ_A and S̄_G are computed as:
ζ_A = 2306″.2181·t + 0″.30188·t² + 0″.017998·t³
z_A = 2306″.2181·t + 1″.09468·t² + 0″.018203·t³
θ_A = 2004″.3109·t − 0″.42665·t² − 0″.041833·t³
S̄_G = 18^h.6973746 + 879000^h.0513367·t + 0^s.093104·t² − 6^s.2×10⁻⁶·t³;
where t = (JD(T) − JD(J2000.0))/36525; JD(T) is the Julian date corresponding to the on-board time T, and JD(J2000.0) = 2451545.0 is the Julian date of epoch J2000.0;
(22) calculate the satellite position S_ECI in the inertial frame and the orbit-to-inertial transformation matrix L_ECI,O from the orbital position;
the satellite position in the inertial frame is calculated as follows:
S_ECI:  x_i = r·(cos u·cos Ω − sin u·cos i·sin Ω)
        y_i = r·(cos u·sin Ω + sin u·cos i·cos Ω)
        z_i = r·sin u·sin i;
where r is the geocentric distance of the satellite, a is the Earth equatorial radius, f is the true anomaly of the satellite orbit, e is the eccentricity, u = ω + f is the argument of latitude, ω is the argument of perigee, Ω is the right ascension of the ascending node of the satellite orbit, and i is the orbit inclination; (x_i, y_i, z_i) are the three axis components of the satellite position in the inertial frame;
the orbit-to-inertial transformation matrix is calculated as follows:
L_ECI,O = L_z(−Ω)·L_x(π/2 − i)·L_y(π/2 + u)
        = [ −cosΩ·sinu − sinΩ·cosi·cosu    −sinΩ·sini     sinΩ·cosi·sinu − cosΩ·cosu ]
          [  cosΩ·cosi·cosu − sinΩ·sinu     cosΩ·sini    −sinΩ·cosu − cosΩ·cosi·sinu ]
          [  cosu·sini                     −cosi         −sini·sinu                  ];
(23) calculate the satellite body-to-orbit transformation matrix L_O,B from the attitude data; L_O,B is composed of elementary rotations through the roll angle φ, the pitch angle θ and the yaw angle ψ, the satellite using the 1-2-3 attitude rotation sequence;
(24) calculate the satellite position S_ECF in the Earth-fixed frame from the results of step (21) and step (22):
S_ECF = L_ECF,ECI · S_ECI;
(25) calculate the optical-axis vector V_ECF in the Earth-fixed frame from the results of step (21), step (22), step (23) and the constant P_k of step (1):
V_ECF = L_ECF,ECI · L_ECI,O · L_O,B · P_1 · V_c,
where V_c is the optical-axis vector in the camera frame, V_c = {0, 0, 1};
(26) calculate the photography-point position from the results of step (24) and step (25); the photography point is the intersection of the optical-axis vector with the Earth, obtained by solving the space line equation of the camera optical-axis vector together with the Earth ellipsoid equation:
(x − X_SECF)/X_VECF = (y − Y_SECF)/Y_VECF = (z − Z_SECF)/Z_VECF
(x² + y²)/a² + z²/c² = 1;
solving these equations gives x, y, z, the position of the photography point in the Earth-fixed coordinate system; (X_SECF, Y_SECF, Z_SECF) are the three axis components of S_ECF; (X_VECF, Y_VECF, Z_VECF) are the three axis components of V_ECF; c is the Earth polar radius; a is the Earth equatorial radius;
(27) calculate the slant range h from the results of step (24) and step (26) and the geographic elevation data stored on board:
h = |SP′| = |SO|·cosα + √(|SO|²·cos²α − |SO|² + |OP′|²),
cosα = (|SO|² + |SP|² − |OP|²) / (2·|SO|·|SP|);
where |SP′| is the corrected slant range h; SO is the distance from the satellite to the Earth's centre, SP is the distance from the satellite to the nominal photography point, OP is the distance from the Earth's centre to the nominal photography point, SP′ is the distance from the satellite to the photography point, and OP′ is the distance from the Earth's centre to the photography point;
(28) calculate the photography-point ground velocity from the results of step (22) and step (23), the constant of step (1) and the attitude data; according to the principles of theoretical mechanics, the relative velocity equals the absolute velocity minus the transport velocity, so the photography-point ground velocity is computed as:
v = ω_e × R − [ω_n × r + (ω_n + ω_s) × H + v_r] = ω_e × R − ω_n × R − ω_s × H − v_r,
where v_r is the radial component of the satellite absolute velocity, ω_e is the Earth rotation-rate vector, R is the vector from the Earth's centre to the target point, H is the vector from the satellite to the target point, r is the radius vector from the Earth's centre to the satellite, with magnitude r; ω_n is the orbital angular-velocity vector, whose magnitude is determined by μ and p, with p = a(1−e²); μ is the Earth's gravitational constant, p is the semi-latus rectum of the satellite orbit, e is the eccentricity and f is the true anomaly; ω_s represents the attitude angular-velocity vector of the satellite, and a is the Earth equatorial radius;
(29) from the results of step (27) and step (28) and the camera pixel-size constant D and focal-length constant F, the integration time of the 1st CCD of the 1st group is calculated as follows:
t = (D/F) × (h/v).
3. The on-board optimized calculation method for the integration time of a remote sensing satellite TDICCD camera according to claim 1, characterized in that 11 intermediate results of the integration-time calculation of the 1st CCD of the 1st group are saved in step (3), specifically: the satellite position in the Earth-fixed frame Mid_S_ECF, the body-to-orbit transformation matrix Mid_L_O,B, the orbit-to-inertial transformation matrix Mid_L_ECI,O, the inertial-to-Earth-fixed transformation matrix Mid_L_ECF,ECI, the orbit-to-body transformation matrix Mid_L_B,O, the radial vector of the satellite absolute velocity in the orbit frame Mid_v_ro, the satellite orbital angular-velocity vector in the orbit frame Mid_w_no, the Earth rotation-rate vector in the orbit frame Mid_w_eo, the vector from the Earth's centre to the satellite in the orbit frame Mid_r_o, the satellite attitude angular-velocity vector in the body frame Mid_w_sb, and the vector from the satellite to the target point in the camera frame Mid_H_c.
4. The on-board optimized calculation method for the integration time of a remote sensing satellite TDICCD camera according to claim 3, characterized in that the specific steps by which step (4) uses the constants of step (1) and the intermediate results of step (3) to calculate the integration times of the remaining M−1 CCDs of the 1st group are as follows:
(41) using the constants of step (1) and the intermediate results Mid_L_O,B, Mid_L_ECI,O, Mid_L_ECF,ECI, calculate the optical-axis vector C_F in the Earth-fixed frame:
C_F = Mid_L_ECF,ECI · Mid_L_ECI,O · Mid_L_O,B · P_k · C_c,
where C_c is the optical-axis vector in the camera frame;
(42) from the optical-axis vector and the intermediate result Mid_S_ECF, calculate the intersection of the optical axis with the Earth, i.e. the photography point, by solving the space line equation of the camera optical-axis vector together with the Earth ellipsoid equation for the intersection position;
(x − X_Mid_SECF)/X_CF = (y − Y_Mid_SECF)/Y_CF = (z − Z_Mid_SECF)/Z_CF
(x² + y²)/a² + z²/c² = 1
where (X_Mid_SECF, Y_Mid_SECF, Z_Mid_SECF) are the three axis components of the satellite Earth-fixed position Mid_S_ECF, (X_CF, Y_CF, Z_CF) are the three axis components of the Earth-fixed optical-axis vector C_F, a is the Earth equatorial radius, and c is the Earth polar radius;
(43) calculate the slant range from the photography point and the surface digital elevation data;
(44) using the constants obtained in step (1) and the intermediate results Mid_L_B,O, Mid_w_no, Mid_w_eo, Mid_r_o, Mid_w_sb, Mid_H_c, Mid_L_O,B and Mid_v_ro, calculate the velocity of the photography point relative to the camera:
v_b = Mid_L_B,O·[(Mid_w_eo − Mid_w_no) × R_o − Mid_v_ro] − Mid_w_sb × (P_k·Mid_H_c),
where R_o = Mid_r_o + Mid_L_O,B·P_k·(Mid_H_c) is the vector from the Earth's centre to the photography point in the orbit frame;
(45) calculate the integration time from the slant range obtained in step (43) and the photography-point ground velocity obtained in step (44).
CN201610052803.0A 2016-01-26 2016-01-26 On-board optimized calculation method for the integration time of a remote sensing satellite TDICCD camera Active CN105547258B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610052803.0A CN105547258B (en) 2016-01-26 2016-01-26 On-board optimized calculation method for the integration time of a remote sensing satellite TDICCD camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610052803.0A CN105547258B (en) 2016-01-26 2016-01-26 On-board optimized calculation method for the integration time of a remote sensing satellite TDICCD camera

Publications (2)

Publication Number Publication Date
CN105547258A CN105547258A (en) 2016-05-04
CN105547258B true CN105547258B (en) 2018-02-09

Family

ID=55826617

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610052803.0A Active CN105547258B (en) 2016-01-26 2016-01-26 On-board optimized calculation method for the integration time of a remote sensing satellite TDICCD camera

Country Status (1)

Country Link
CN (1) CN105547258B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106559665B * 2016-10-20 2018-02-09 北京空间飞行器总体设计部 Method for determining the integration time of an off-axis camera
CN107389095B * 2017-07-18 2019-07-23 武汉大学 Drift-angle correction method based on statistics of the overlapping-pixel-count deviation between adjacent CCD pieces
CN108401105B (en) * 2018-02-09 2020-08-21 中国科学院长春光学精密机械与物理研究所 Method for adjusting dynamic transfer function of space remote sensing TDICCD camera
CN110967005B (en) * 2019-12-12 2022-04-05 中国科学院长春光学精密机械与物理研究所 Imaging method and imaging system for on-orbit geometric calibration through star observation

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7391925B2 (en) * 2003-12-04 2008-06-24 Lockheed Martin Missiles & Fire Control System and method for estimating noise using measurement based parametric fitting non-uniformity correction
CN100565105C * 2008-02-03 2009-12-02 航天东方红卫星有限公司 Integration time calculation and adjustment method for a spaceborne TDICCD camera
CN104144304B * 2014-07-04 2017-03-15 航天东方红卫星有限公司 Method for determining the integration time of different fields of view of a high-resolution camera
CN104581144B * 2015-01-16 2016-08-24 航天东方红卫星有限公司 Method for determining the full-field integration time of a spaceborne linear-array push-broom camera

Also Published As

Publication number Publication date
CN105547258A (en) 2016-05-04

Similar Documents

Publication Publication Date Title
CN105547258B (en) Optimized calculation method on a kind of remote sensing satellite TDICCD camera integration time stars
CN104462776B (en) A kind of low orbit earth observation satellite is to moon absolute radiation calibration method
CN104848860A (en) Method for planning attitude maneuver in imaging process of agile satellite
CN101414003B (en) Star-loaded SAR image geocoding method based on star ground coordinate transformation
CN102168981A (en) Independent celestial navigation method for Mars capturing section of deep space probe
CN103310487B (en) A kind of universal imaging geometric model based on time variable generates method
CN104567819A (en) Method for determining and compensating full-field drift angle of space-based camera
CN109631911A (en) A kind of attitude of satellite rotation information based on deep learning Target Recognition Algorithms determines method
Qiu et al. Attitude maneuver planning of agile satellites for time delay integration imaging
CN102279001A (en) Phase shift compensation method of space-borne camera
CN103778610B (en) A kind of spaceborne line array sensor hangs down the geometry preprocess method of rail sweeping image
Alexander et al. A terrain relative navigation sensor enabled by multi-core processing
Klaasen Mercury's rotation axis and period
Petrie Some considerations regarding mapping from earth satellites
CN111127319B (en) Ground pixel resolution calculation method for push-broom imaging in motion
CN110608724A (en) Direct solving method for drift-free attitude in satellite maneuvering imaging process
Theiler et al. Automated coregistration of MTI spectral bands
Van Der Stokker An investigation into the imager pointing accuracy and stability for a CubeSat Using a CubeADCS in sun-synchronous low earth orbits
Somov et al. Verification of attitude control system for a land-survey satellite by analysis of an image motion in onboard telescope.
CN115118876B (en) Shooting parameter determining method and device and computer readable storage medium
Gutiérrez Russell Developing and testing an ultra-low-cost star tracker for attitude determinations of nanosatellites
Lee et al. An algorithm for geometric correction of high resolution image based on physical modeling
Gupta et al. Camera estimation for orbiting pushbroom imaging systems
Levin et al. The Lunar Orbiter missions to the moon
CN117478197A (en) Determination method for active push-broom imaging integration time of optical agile satellite

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant